| Training Details | |
|---|---|
| Data Sources | allenai/ai2_arc, camel-ai/physics, camel-ai/chemistry, camel-ai/biology, camel-ai/math, metaeval/reclor, openbookqa, mandyyyyii/scibench, derek-thomas/ScienceQA, TIGER-Lab/ScienceEval, jondurbin/airoboros-3.2, LDJnr/Capybara, Cot-Alpaca-GPT4-From-OpenHermes-2.5, STEM-AI-mtl/Electrical-engineering, knowrohit07/saraswati-stem, sablo/oasst2_curated, glaiveai/glaive-code-assistant, lmsys/lmsys-chat-1m, TIGER-Lab/MathInstruct, bigbio/med_qa, meta-math/MetaMathQA-40K, piqa, sciq, Open-Orca/SlimOrca, migtissera/Synthia-v1.3 |
| Methodology | Fine-tuning with QLoRA on a diverse mix of STEM, reasoning, and instruction-following datasets (see the sketch below) |
| Context Length | |
| Hardware Used | |
| Model Architecture | QLoRA fine-tuned version of Qwen1.5 |
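Since the card names QLoRA as the fine-tuning method, here is a minimal sketch of what that setup typically looks like with the Hugging Face `transformers`/`peft`/`bitsandbytes` stack. The base checkpoint size, LoRA rank/alpha, target modules, and quantization settings below are illustrative assumptions, not the card's actual configuration:

```python
# Minimal QLoRA fine-tuning sketch (hyperparameters are assumptions, not this model's config).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base = "Qwen/Qwen1.5-7B"  # assumed base checkpoint; the card does not state the model size

# 4-bit NF4 quantization of the frozen base weights -- the "Q" in QLoRA.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Low-rank adapters are the only trainable parameters; the quantized base stays frozen.
lora_config = LoraConfig(
    r=16,                  # assumed rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```

Training would then proceed with a standard supervised fine-tuning loop (e.g. `trl`'s `SFTTrainer`) over the blended datasets listed above.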
|
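For inference, a QLoRA adapter is usually loaded on top of (or merged into) the base model. A sketch assuming the adapter was published as a `peft` checkpoint; both repo ids below are placeholders:

```python
# Loading the adapter for inference (repo ids are placeholders, not this model's actual repos).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen1.5-7B"            # assumed base checkpoint
adapter_id = "your-org/your-adapter"   # placeholder for this model's adapter repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
model = model.merge_and_unload()  # optionally fold LoRA weights into the base for plain inference

prompt = "Explain Ohm's law in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```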
|