**Model Type:** auto-regressive language model, instruction fine-tuned
|
**Use Cases**
- **Areas:** research, commercial applications
- **Applications:** instruction-based tasks, storytelling
- **Limitations:** English-language model; testing in English cannot cover all scenarios
- **Considerations:** Developers should perform safety testing and tuning tailored to their specific applications of the model.
|
|
**Additional Notes:** Quantized versions and alternative file formats offer flexibility in trading off performance against resource requirements.
|
**Supported Languages:** English
**Training Details**
- **Data Sources:** garage-bAInd/Open-Platypus
- **Methodology:** fine-tuned using LoRA (Low-Rank Adaptation)
- **Hardware Used:** 1× NVIDIA A100 80 GB GPU
- **Model Architecture:** LLaMA2 transformer architecture
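The methodology above (LoRA fine-tuning on a single A100 80 GB GPU) can be made concrete with some back-of-the-envelope arithmetic showing why LoRA fits on one card. The rank, target modules, hidden size, and layer count below are illustrative assumptions for a LLaMA2-7B-class model, not values stated in this card:

```python
# Rough trainable-parameter count for LoRA on a LLaMA2-7B-class model.
# Assumptions (hypothetical, not from this card): rank r = 16, adapters on
# the four attention projections (q, k, v, o), hidden size 4096, 32 layers.
rank = 16
hidden = 4096
layers = 32
projections_per_layer = 4  # q_proj, k_proj, v_proj, o_proj

# Each adapted (hidden x hidden) matrix gains A (r x hidden) and
# B (hidden x r), i.e. r * (hidden + hidden) extra parameters.
lora_params_per_matrix = rank * (hidden + hidden)
trainable = lora_params_per_matrix * projections_per_layer * layers

base_params = 7_000_000_000  # approximate size of the frozen base model
print(f"Trainable LoRA parameters: {trainable:,}")  # 16,777,216
print(f"Fraction of base model: {trainable / base_params:.4%}")
```

Under these assumptions only about 0.24% of the weights are trainable, which is what keeps optimizer state and gradients small enough for a single 80 GB GPU.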
|
|
**Safety Evaluation**
- **Ethical Considerations:** Testing conducted to date has been in English and has not covered, nor could it cover, all scenarios. Potential outputs cannot be predicted in advance.
|
|
**Responsible AI Considerations**
- **Fairness:** Potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased, or otherwise objectionable responses.
- **Transparency:** Refer to the Responsible Use Guide for transparency actions.
- **Accountability:** Developers should perform safety testing and tuning tailored to their specific applications of the model.
- **Mitigation Strategies:** Perform safety testing and tuning before deploying any application.
|
|
**Input/Output**
- **Input Format:** instruction-response format
- **Accepted Modalities:** text
- **Output Format:** text
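The instruction-response format can be sketched with a small prompt builder. The Alpaca-style template below is an assumption; the card does not specify the exact preamble or section markers, so verify against the model's actual training format before use:

```python
# Hypothetical Alpaca-style prompt template for instruction-response models.
# The preamble and "###" section headers are assumptions, not confirmed
# details of this model card.
def build_prompt(instruction: str, user_input: str = "") -> str:
    """Assemble a single instruction-response prompt string."""
    header = (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
    )
    if user_input:
        # Variant with a separate input section for context the task needs.
        return (f"{header}### Instruction:\n{instruction}\n\n"
                f"### Input:\n{user_input}\n\n### Response:\n")
    return f"{header}### Instruction:\n{instruction}\n\n### Response:\n"

print(build_prompt("Write a short story about a lighthouse keeper."))
```

The model is then expected to continue generating text after the final `### Response:` marker.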
|