| Field | Value |
|---|---|
| Model Type | auto-regressive language model |
|
Use Cases

| Field | Value |
|---|---|
| Areas | |
| Applications | assistant-like chat, natural language generation |
| Primary Use Cases | conversational AI, text generation |
| Limitations | Not to be used in violation of applicable laws or the Acceptable Use Policy |
| Considerations | Models should be tuned and assessed for specific applications. |
|
|
Additional Notes

Future versions will improve safety with community feedback.
|
Training Details

| Field | Value |
|---|---|
| Data Sources | publicly available online data, publicly available instruction datasets, over 10M human-annotated examples |
| Data Volume | |
| Methodology | supervised fine-tuning (SFT), reinforcement learning with human feedback (RLHF) |
| Context Length | |
| Hardware Used | |
| Model Architecture | optimized transformer architecture |
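The SFT stage listed above trains on next-token cross-entropy. A toy illustration of that loss, using made-up per-token probabilities rather than a real model:

```python
import math

def sft_loss(token_probs):
    """Average next-token cross-entropy over a sequence of target tokens.

    token_probs: the probability the model assigned to each correct
    next token (illustrative numbers, not real model outputs).
    """
    return -sum(math.log(p) for p in token_probs) / len(token_probs)

# A model that is confident on every target token has a lower loss
# than one that spreads probability mass elsewhere.
confident = sft_loss([0.9, 0.8, 0.95])
uncertain = sft_loss([0.2, 0.1, 0.3])
assert confident < uncertain
```

RLHF then further adjusts the SFT model with a learned reward signal rather than this token-level objective.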
|
|
Safety Evaluation

| Field | Value |
|---|---|
| Methodologies | red teaming exercises, adversarial evaluations |
| Findings | |
| Risk Categories | misuse, cybersecurity, child safety |
| Ethical Considerations | responsible deployment, safety standards |
|
|
Responsible AI Considerations

| Field | Value |
|---|---|
| Fairness | integration of Llama Guard, Code Shield |
| Transparency | updated Responsible Use Guide |
| Mitigation Strategies | safety tools such as Llama Guard and Purple Llama are recommended |
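A minimal sketch of how a safety classifier such as Llama Guard can gate both sides of a chat exchange. The `classify` stub and its keyword list are hypothetical stand-ins for illustration only; a real deployment would call the Llama Guard model itself:

```python
# Gate an assistant behind a safety check on both the prompt and the reply.
# `classify` is a hypothetical stub, not the real Llama Guard API: in
# practice Llama Guard is itself an LLM that labels a conversation turn
# as safe or unsafe against a policy taxonomy.

BLOCKED_MESSAGE = "Sorry, I can't help with that request."

def classify(text: str) -> bool:
    """Return True when the text is considered safe (illustrative stub)."""
    banned_topics = ("build a weapon", "malware")  # illustrative only
    return not any(topic in text.lower() for topic in banned_topics)

def guarded_reply(user_message: str, generate) -> str:
    """Check the user input, generate a reply, then check the output too."""
    if not classify(user_message):
        return BLOCKED_MESSAGE
    reply = generate(user_message)
    if not classify(reply):
        return BLOCKED_MESSAGE
    return reply
```

Checking the model's output as well as the user's input is the pattern the Purple Llama tooling encourages: neither side of the conversation is trusted on its own.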
|
|
Input / Output

| Field | Value |
|---|---|
| Input Format | |
| Accepted Modalities | |
| Output Format | |
| Performance Tips | Optimized for dialogue. Apply safety protocols and application-specific tuning. |
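Because the model is optimized for dialogue, inputs are typically formatted with the Llama 3 Instruct chat template. A minimal sketch of assembling that prompt by hand, assuming the published special-token layout; in practice the `transformers` tokenizer's `apply_chat_template` does this for you:

```python
# Assemble a Llama 3 Instruct chat prompt by hand.
# The special-token layout below follows the published Llama 3 chat
# template; verify against the official prompt-format documentation
# before relying on it.

def build_llama3_prompt(messages):
    """messages: list of {"role": ..., "content": ...} dicts."""
    parts = ["<|begin_of_text|>"]
    for msg in messages:
        parts.append(
            f"<|start_header_id|>{msg['role']}<|end_header_id|>"
            f"\n\n{msg['content']}<|eot_id|>"
        )
    # Trailing assistant header cues the model to produce the reply.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = build_llama3_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is an auto-regressive language model?"},
])
```

Generation should stop on the `<|eot_id|>` token so the model does not continue past its own turn.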
|
|
Release Notes

| Field | Value |
|---|---|
| Version | |
| Date | |
| Notes | Release of the Llama 3 large language models |
|
|
|