| Attribute | Details |
|---|---|
| Model Type | text-generation-inference, fine-tuned (see the inference sketch below) |
| Use Cases | Areas: medical, farmer, doctor, Mega-Series, Cyber-Series, Role-Play, Self-Rag, ThinkingBot |
| Additional Notes | The model is highly tuned for text generation and role-play, and can maintain personas. It is designed for strategic merging and tuning so that different capabilities can be maintained separately. |
| Supported Languages | |
| Training Details | Data Sources: gretelai/synthetic_text_to_sql, HuggingFaceTB/cosmopedia, teknium/OpenHermes-2.5, Open-Orca/SlimOrca, Open-Orca/OpenOrca, cognitivecomputations/dolphin-coder, databricks/databricks-dolly-15k, yahma/alpaca-cleaned, uonlp/CulturaX, mwitiderrick/SwahiliPlatypus, Rogendo/English-Swahili-Sentence-Pairs, ise-uiuc/Magicoder-Evol-Instruct-110K, meta-math/MetaMathQA (see the loading sketch below)<br>Methodology: <br>Context Length: |
| Input Output | |
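Since the model type is text-generation-inference / fine-tuned, a minimal inference sketch with the `transformers` pipeline is shown below. The repo id `your-org/your-model`, the prompt, and the generation parameters are placeholders, not values from this card.

```python
from transformers import pipeline

# Placeholder repo id -- substitute the actual model id for this card.
MODEL_ID = "your-org/your-model"

# Build a standard text-generation pipeline; the card lists
# "text-generation-inference, fine-tuned" as the model type.
generator = pipeline("text-generation", model=MODEL_ID)

# Example prompt in one of the card's stated areas (role-play / medical).
prompt = "You are a rural clinic doctor. A farmer asks how to treat a minor cut:"
result = generator(prompt, max_new_tokens=128, do_sample=True, temperature=0.7)
print(result[0]["generated_text"])
```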
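The data sources listed under Training Details are public Hugging Face datasets, so any of them can be inspected with the `datasets` library. The sketch below loads one listed source (teknium/OpenHermes-2.5); the `train` split name is an assumption about that dataset's layout.

```python
from datasets import load_dataset

# Load one of the data sources listed in Training Details.
# "train" is assumed to be the available split for this dataset.
ds = load_dataset("teknium/OpenHermes-2.5", split="train")

print(ds)      # features and row count
print(ds[0])   # first example, e.g. a conversations-style record
```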