| Training Details | |
|---|---|
| Data Sources: | ai2_arc, jondurbin/airoboros-3.2, codeparrot/apps, facebook/belebele, boolq, jondurbin/cinematika-v0.1, drop, lmsys/lmsys-chat-1m, TIGER-Lab/MathInstruct, cais/mmlu, Muennighoff/natural-instructions, openbookqa, piqa, Vezora/Tested-22k-Python-Alpaca, cakiki/rosetta-code, Open-Orca/SlimOrca, spider, squad_v2, migtissera/Synthia-v1.3, winogrande, nvidia/HelpSteer, Intel/orca_dpo_pairs, unalignment/toxic-dpo-v0.1, jondurbin/truthy-dpo-v0.1, allenai/ultrafeedback_binarized_cleaned, Squish42/bluemoon-fandom-1-1-rp-cleaned, LDJnr/Capybara, JULIELab/EmoBank, kingbri/PIPPA-shareGPT (see the mixture-loading sketch below the table) |
| Methodology: | Experimental fine-tune using bagel; this checkpoint was taken after the SFT phase and before DPO was applied (see the DPO staging sketch below the table). |
| Context Length: | |
| Training Time: | 4 days, 15 hours, 6 minutes, and 42 seconds |
| Hardware Used: | |
| Model Architecture: | |
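
A minimal sketch, not the actual bagel pipeline, of how a multi-source SFT mixture like the one listed above could be assembled with the Hugging Face `datasets` library. The three dataset IDs are taken from the table; the normalization into a single `text` column is an assumption for illustration.

```python
from datasets import load_dataset, concatenate_datasets

# Three of the sources from the table above; the full mixture uses many more.
SOURCES = ["Open-Orca/SlimOrca", "TIGER-Lab/MathInstruct", "Vezora/Tested-22k-Python-Alpaca"]

parts = []
for name in SOURCES:
    ds = load_dataset(name, split="train")
    # Flatten each source's heterogeneous schema into one "text" field.
    # Placeholder logic: the real pipeline would apply a proper prompt format.
    parts.append(ds.map(lambda ex: {"text": str(ex)}, remove_columns=ds.column_names))

mixture = concatenate_datasets(parts).shuffle(seed=42)
print(f"{len(mixture):,} examples in the combined mixture")
```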
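
The methodology row says this checkpoint sits between bagel's two stages: SFT is complete, DPO has not yet run. Below is a hedged sketch of the DPO stage that would follow, using TRL's `DPOTrainer`. The model path and hyperparameters are placeholders, not the values used for this model, and TRL's exact argument names vary by version (older releases take `tokenizer=` instead of `processing_class=`).

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "path/to/this-sft-checkpoint"  # placeholder path, not a real repo ID
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# DPO trains on (prompt, chosen, rejected) preference pairs; Intel/orca_dpo_pairs
# from the data-source list is close to that shape after renaming one column.
pairs = load_dataset("Intel/orca_dpo_pairs", split="train")
pairs = pairs.rename_column("question", "prompt").remove_columns(["system"])

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="bagel-dpo-out", beta=0.1),  # beta value is a guess
    train_dataset=pairs,
    processing_class=tokenizer,
)
trainer.train()
```

Note that when no `ref_model` is passed, `DPOTrainer` clones the policy as the frozen reference model against which the preference loss is computed.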