**Model Type:** large language model, instruction tuned, text generation
| Use Cases | |
| --- | --- |
| Areas | |
| Applications | assistant-like chat, natural language generation |
| Primary Use Cases | Instruction tuned models are intended for assistant-like chat; pretrained models can be adapted for a variety of natural language generation tasks. |
| Limitations | Use in languages other than English must comply with the Acceptable Use Policy. |
| Considerations | Use should align with the Llama 3 policies and guidelines. |
|
|
**Additional Notes:** Future versions will incorporate community feedback for model improvements.
|
**Supported Languages:** English (fully supported); other languages may require fine-tuning.
|
| Training Details | |
| --- | --- |
| Data Sources | publicly available online data |
| Data Volume | |
| Methodology | supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF) |
| Context Length | |
| Training Time | |
| Hardware Used | |
| Model Architecture | optimized transformer architecture |
|
|
| Safety Evaluation | |
| --- | --- |
| Methodologies | red-teaming, adversarial evaluations |
| Findings | |
| Risk Categories | child safety, cybersecurity |
| Ethical Considerations | Limitations and potential misuse were evaluated; developers are encouraged to follow the Responsible Use Guide. |
|
|
| Responsible AI Considerations | |
| --- | --- |
| Fairness | Designed to serve users from diverse backgrounds and perspectives. |
| Transparency | Open approach to AI; community involvement is encouraged. |
| Accountability | Developers should perform safety testing before deployment. |
| Mitigation Strategies | Use of Meta Llama Guard and Code Shield safeguards. |
|
|
| Input/Output | |
| --- | --- |
| Input Format | |
| Accepted Modalities | |
| Output Format | Generates text and code only. |
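As a minimal illustration of how text input is typically presented to an instruction tuned Llama 3 model, the sketch below renders a list of chat messages into a single prompt string. It assumes the Llama 3 instruct template's special tokens (`<|begin_of_text|>`, `<|start_header_id|>`, `<|end_header_id|>`, `<|eot_id|>`); in practice a tokenizer's `apply_chat_template` method would handle this, and the helper name `format_llama3_chat` is hypothetical.

```python
def format_llama3_chat(messages):
    """Render a list of {'role', 'content'} dicts into one prompt string,
    assuming the Llama 3 instruct chat template."""
    prompt = "<|begin_of_text|>"
    for msg in messages:
        # Each turn is wrapped in role headers and terminated with <|eot_id|>.
        prompt += f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
        prompt += f"{msg['content']}<|eot_id|>"
    # Open an assistant header so the model generates the reply next.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write a haiku about the sea."},
]
print(format_llama3_chat(messages))
```

The trailing assistant header is what cues the model to produce the next turn; generation stops when the model emits its own `<|eot_id|>`.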
|
|