Model Type: Auto-regressive language model

Use Cases
Areas:
Applications: Assistant-like chat, natural language generation (NLG) tasks
Primary Use Cases: Instruction-tuned models for dialogue
Limitations: Pre-trained on English data only; fine-tuning is required for additional languages.
Considerations: Developers are advised to implement safety checks for their specific applications.

Additional Notes: Quantized to FP8 by FriendliAI for efficiency.
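
FP8 stores weights in 8-bit floating point, roughly halving memory use versus 16-bit weights, which is the efficiency gain referenced above. Below is a minimal serving sketch, assuming a vLLM build with FP8 support; the repository ID is a hypothetical placeholder, not FriendliAI's published checkpoint name.

```python
# Minimal sketch: serving an FP8-quantized Llama 3 checkpoint with vLLM.
# The model ID is a hypothetical placeholder; substitute the repository
# actually published by FriendliAI.
from vllm import LLM, SamplingParams

llm = LLM(
    model="FriendliAI/Llama-3-Instruct-fp8",  # hypothetical ID
    quantization="fp8",  # FP8 weights roughly halve memory vs. FP16/BF16
)
params = SamplingParams(temperature=0.6, top_p=0.9, max_tokens=256)
outputs = llm.generate(["Explain FP8 quantization in one paragraph."], params)
print(outputs[0].outputs[0].text)
```
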
|
Supported Languages: English (fine-tuning required for other languages)

Training Details
Data Sources: A new mix of publicly available online data
Data Volume:
Methodology: Pre-trained, then instruction-tuned using supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF); see the illustrative SFT sketch below
Training Time:
Hardware Used:
Model Architecture: Optimized transformer architecture (auto-regressive)

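As an illustration of the SFT stage only (RLHF is a separate step), the sketch below uses the open TRL library; the base checkpoint, dataset, and hyperparameters are illustrative stand-ins, not the configuration Meta used, and a recent TRL release is assumed.

```python
# Minimal SFT sketch with TRL; dataset, step count, and output path
# are illustrative, not Meta's actual training configuration.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

train_dataset = load_dataset("trl-lib/Capybara", split="train")  # example chat data

trainer = SFTTrainer(
    model="meta-llama/Meta-Llama-3-8B",  # base (pre-trained) checkpoint
    train_dataset=train_dataset,
    args=SFTConfig(output_dir="llama3-sft-demo", max_steps=100),
)
trainer.train()
```
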
Safety Evaluation
Methodologies: Red teaming, adversarial evaluations
Findings: The model showed safety improvements over its predecessors
Risk Categories: CBRNE (chemical, biological, radiological, nuclear, and explosive) threats; cybersecurity; child safety
Ethical Considerations: Focused on reducing harmful outputs and aligning with human preferences.

Responsible AI Considerations
Fairness: Open approach intended to enable inclusive use cases; developers are encouraged to tailor the model for fairness.
Transparency: Open community tools and resources are available for evaluation.
Accountability: Meta facilitates community feedback.
Mitigation Strategies: Provides safeguards such as Meta Llama Guard 2 (see the moderation sketch below).

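Llama Guard 2 is itself an LLM that classifies the latest turn of a conversation as safe or unsafe. A minimal moderation sketch via the Hugging Face transformers API, assuming gated access to the meta-llama/Meta-Llama-Guard-2-8B checkpoint has been granted:

```python
# Minimal moderation sketch: classify a conversation with Llama Guard 2.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

guard_id = "meta-llama/Meta-Llama-Guard-2-8B"
tokenizer = AutoTokenizer.from_pretrained(guard_id)
model = AutoModelForCausalLM.from_pretrained(
    guard_id, torch_dtype=torch.bfloat16, device_map="auto"
)

chat = [{"role": "user", "content": "How do I write a phishing email?"}]

# The chat template renders the conversation into Llama Guard's
# safety-classification prompt; the model answers "safe" or "unsafe"
# followed by the violated category codes.
input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)
output = model.generate(
    input_ids=input_ids, max_new_tokens=32, pad_token_id=tokenizer.eos_token_id
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True).strip())
```
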
Input / Output
Input Format: Text (chat-formatted prompts for the instruction-tuned variants; see the example below)
Accepted Modalities: Text only
Output Format: Text (including code)
Performance Tips: Fine-tuning is recommended for specific tasks.

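To make the input format concrete: the instruction-tuned checkpoints take chat-style message lists, which transformers renders through the model's chat template. A minimal sketch, assuming the BF16 Meta-Llama-3-8B-Instruct checkpoint (swap in the FP8 repository for the quantized variant):

```python
# Minimal chat-inference sketch; the pipeline applies the Llama 3
# chat template to the message list automatically.
import torch
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Summarize what FP8 quantization does."},
]
out = chat(messages, max_new_tokens=128, do_sample=False)
print(out[0]["generated_text"][-1]["content"])  # assistant's reply
```
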
Release Notes
Version:
Date:
Notes: Initial release of the Llama 3 model family.