Model Type | instruction, conversational, coding, function calling

Use Cases
Areas: | research, commercial applications

Applications: | instruction-following, conversational agents, coding assistants, function calling

Primary Use Cases: | chatbots, programming, AI assistance

Limitations: | Uncensored model; users should implement their own alignment layer and ensure responsible use.

Considerations: | Implement your own mitigation strategies before deployment.

Additional Notes | Uses PEFT layer replication at inference time to increase the effective parameter count; the adapter-based approach keeps VRAM usage low. Based on Unsloth's Mistralfied Phi-3-Instruct-4k. Thanks to Crusoe Cloud for hardware support.

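The layer-replication note above can be sketched in plain Python. The helper name and the ranges below are illustrative assumptions (they mirror the idea behind PEFT's `layer_replication` option, not this model's actual configuration): each `[start, end)` range of base layers is appended to the forward schedule, so the same base layer can appear more than once and the effective depth grows while the underlying weights are stored only once.

```python
def expand_layer_schedule(ranges):
    """Expand [start, end) ranges into a flat list of base-layer indices.

    A base layer may appear in several ranges, so the effective model is
    deeper than the base model even though no extra base weights are stored;
    only small per-occurrence adapters differ.
    """
    return [i for start, end in ranges for i in range(start, end)]

# Illustrative example: a 32-layer base model stretched to 48 effective layers.
schedule = expand_layer_schedule([[0, 24], [8, 32]])
print(len(schedule))       # effective depth → 48
print(len(set(schedule)))  # distinct base layers actually stored → 32
```

Because layers 8-23 appear twice in the schedule, the effective parameter count rises by roughly 50% here while base-weight VRAM stays flat.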
Supported Languages | |
Training Details
Data Sources: | cognitivecomputations/Dolphin-2.9, teknium/OpenHermes-2.5, m-a-p/CodeFeedback-Filtered-Instruction, cognitivecomputations/dolphin-coder, cognitivecomputations/samantha-data, microsoft/orca-math-word-problems-200k, Locutusque/function-calling-chatml, internlm/Agent-FLAN

Methodology: | QLoRA fine-tuning with a 4k sequence length

Context Length: | 4k tokens

Training Time: |
Hardware Used: |

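The QLoRA methodology above can be sketched as a training configuration: the base model is frozen and loaded in 4-bit NF4 precision, and low-rank adapters are trained on top. This is a minimal sketch under stated assumptions; the checkpoint name and every hyperparameter (`r`, `lora_alpha`, target modules, dropout) are illustrative, not the model's published settings.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization of the frozen base model (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "unsloth/Phi-3-mini-4k-instruct",  # assumed base checkpoint
    quantization_config=bnb_config,
)

# Trainable low-rank adapters; values below are illustrative assumptions
lora_config = LoraConfig(
    r=32,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
# Train with a tokenizer/dataloader capped at the 4k (4096-token) sequence length.
```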
Responsible AI Considerations
Fairness: | The dataset was filtered to remove alignment and bias.

Transparency: | See the blog post on uncensored models.

Accountability: | Users are responsible for any content they create.

Mitigation Strategies: | Implement your own alignment layer before exposing the model as a service.

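The mitigation guidance above can be sketched as a thin moderation wrapper in front of the model. Everything here is a hypothetical illustration: the names (`BLOCKLIST`, `moderate`, `guarded_generate`) are not part of any real API, and a static blocklist stands in for whatever classifier or policy check a real service would use.

```python
# Hypothetical pre-filter; replace the blocklist with a real moderation classifier.
BLOCKLIST = {"make a bomb", "credit card numbers"}

def moderate(prompt: str) -> bool:
    """Return True if the prompt may be forwarded to the model."""
    text = prompt.lower()
    return not any(term in text for term in BLOCKLIST)

def guarded_generate(prompt: str, generate) -> str:
    """Wrap an unaligned model's generate callable with a refusal path."""
    if not moderate(prompt):
        return "I can't help with that request."
    return generate(prompt)

# Usage with a stand-in generate function:
print(guarded_generate("write a haiku", lambda p: "model output"))  # → model output
```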
Input Output
Input Format: |
Accepted Modalities: |