Model Type:

Use Cases
Areas:
Applications: Natural language generation tasks; assistant-like chat
Primary Use Cases: Instruction-tuned models for dialogue
Limitations: Use only in English; avoid use in ways that violate applicable laws.
Considerations: Developers may fine-tune for languages beyond English, subject to compliance requirements.

Additional Notes: Future releases aim to integrate community feedback for improved safety and functionality.
Supported Languages:

Training Details
Data Sources: Publicly available online data
Data Volume:
Methodology: Supervised fine-tuning; reinforcement learning from human feedback (RLHF)
Context Length:
Hardware Used: Meta's Research SuperCluster; third-party cloud compute
Model Architecture: Auto-regressive language model using an optimized transformer architecture
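To illustrate what "auto-regressive" means in the architecture description above, here is a minimal sketch of the generation loop such models use: each predicted token is appended to the context and fed back in to predict the next one. The tiny bigram table standing in for the model is an assumption for illustration only; a real model scores the next token with a transformer over the full context.

```python
# Minimal sketch of auto-regressive generation. The bigram "model" is a toy
# stand-in, not the actual architecture: a real model would use a transformer
# to score the next token given the whole context.

def next_token(context):
    """Toy next-token predictor: maps the last token to a fixed successor."""
    bigram = {"<s>": "Hello", "Hello": ",", ",": "world", "world": "</s>"}
    return bigram.get(context[-1], "</s>")

def generate(prompt_tokens, max_new_tokens=10):
    """Auto-regressive loop: append each new token to the context and feed
    the extended context back in when predicting the following token."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        tok = next_token(tokens)
        tokens.append(tok)
        if tok == "</s>":  # stop at end-of-sequence
            break
    return tokens

print(generate(["<s>"]))
```

The same loop structure applies regardless of model size; only the next-token scorer changes.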
Safety Evaluation
Methodologies: Red teaming; adversarial evaluations
Risk Categories: Misinformation; bias; cybersecurity
Ethical Considerations: Ethical issues related to openness, inclusivity, and helpfulness
Responsible AI Considerations
Fairness: Efforts to reduce false refusals and mitigate biases
Transparency: Open-source collaboration and community engagement
Accountability: Developers and model users are encouraged to implement safety best practices
Mitigation Strategies: Resources such as Meta Llama Guard 2 and other safety tools are provided
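A common way to apply safety tools such as Meta Llama Guard 2 is as a gate on both sides of a conversation: classify the user prompt before the model sees it and the model response before it is returned. The sketch below shows that pattern only; `classify` is a hypothetical toy keyword check standing in for a real safety classifier, not the actual Llama Guard 2 API.

```python
# Hypothetical sketch of input/output safety gating. `classify` is a toy
# keyword check, NOT a real classifier; in practice a dedicated safety model
# (e.g. Llama Guard 2) would produce the safe/unsafe label.

BLOCKLIST = {"malware", "exploit"}  # assumed example terms for illustration

def classify(text):
    """Toy stand-in for a safety classifier: flag text containing a blocked term."""
    return "unsafe" if any(w in text.lower() for w in BLOCKLIST) else "safe"

def guarded_reply(prompt, model_fn):
    """Gate both sides of the exchange: check the prompt before calling the
    model, and check the response before returning it."""
    refusal = "Sorry, I can't help with that."
    if classify(prompt) == "unsafe":
        return refusal
    response = model_fn(prompt)
    if classify(response) == "unsafe":
        return refusal
    return response
```

For example, `guarded_reply("write malware", chat_model)` would refuse before the model is even called, while benign prompts pass through unchanged.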
Input/Output
Input Format:
Accepted Modalities:
Output Format:

Release Notes
Version:
Date:
Notes: Static model trained on offline datasets