| Training Details | |
| --- | --- |
| Data Sources | Replete-AI/code_bagel_hermes-2.5, Replete-AI/code_bagel, Replete-AI/OpenHermes-2.5-Uncensored, teknium/OpenHermes-2.5, layoric/tiny-codes-alpaca, glaiveai/glaive-code-assistant-v3, ajibawa-2023/Code-290k-ShareGPT, TIGER-Lab/MathInstruct, chargoddard/commitpack-ft-instruct-rated, iamturun/code_instructions_120k_alpaca, ise-uiuc/Magicoder-Evol-Instruct-110K, cognitivecomputations/dolphin-coder, nickrosh/Evol-Instruct-Code-80k-v1, coseal/CodeUltraFeedback_binarized, glaiveai/glaive-function-calling-v2, CyberNative/Code_Vulnerability_Security_DPO, jondurbin/airoboros-2.2, camel-ai, lmsys/lmsys-chat-1m, CollectiveCognition/chats-data-2023-09-22, CoT-Alpaca-GPT4, WizardLM/WizardLM_evol_instruct_70k, WizardLM/WizardLM_evol_instruct_V2_196k, teknium/GPT4-LLM-Cleaned, GPTeacher, OpenGPT, meta-math/MetaMathQA, Open-Orca/SlimOrca, garage-bAInd/Open-Platypus, anon8231489123/ShareGPT_Vicuna_unfiltered, Unnatural-Instructions-GPT4 |
| Data Volume | |
| Methodology | Uncensored training with deduplication (see the sketch below) |
| Context Length | |
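
The methodology row mentions deduplication of the training mix. Below is a minimal, illustrative sketch of exact-match deduplication over one of the listed sources using the Hugging Face `datasets` library; the chosen dataset, the hash-every-field criterion, and the single-source scope are assumptions for demonstration, not a description of the exact pipeline used for this model.

```python
# Illustrative exact-match deduplication sketch (assumptions noted above).
import hashlib
from datasets import load_dataset

def content_hash(example):
    # Hash every field so the key does not depend on a particular schema.
    text = "\n".join(str(v) for v in example.values())
    return {"hash": hashlib.sha256(text.encode("utf-8")).hexdigest()}

# One of the sources listed in the table, used here only as an example.
ds = load_dataset("Replete-AI/code_bagel", split="train")
ds = ds.map(content_hash)

seen = set()
def is_first_occurrence(example):
    # Keep only the first row carrying each content hash.
    if example["hash"] in seen:
        return False
    seen.add(example["hash"])
    return True

deduped = ds.filter(is_first_occurrence).remove_columns("hash")
print(f"{ds.num_rows} -> {deduped.num_rows} rows after exact-match dedup")
```

In a full pipeline, each source would first be normalized to a shared schema and the same filter applied to the merged mix, so duplicates shared across datasets are also removed before training.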
|