**Model Type:** text-generation, instruction-tuned
## Use Cases

- **Areas:**
- **Primary Use Cases:** assistant-like chat, natural language generation tasks
- **Limitations:** use limited to English; potential for unpredictable outputs
- **Considerations:** Developers should exercise discretion.
**Supported Languages:**
## Training Details

- **Data Sources:**
- **Data Volume:** <0.003% of Llama-3's original pre-training data
- **Methodology:** NTK-aware interpolation, progressive training
- **Context Length:**
- **Hardware Used:**
- **Model Architecture:** auto-regressive transformer
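The NTK-aware interpolation named in the methodology above is commonly implemented as a rescaling of the rotary position embedding (RoPE) base. A minimal sketch, assuming standard RoPE and using illustrative numbers (base 10000, head dimension 128, extension factor 4) that are not taken from this card:

```python
def ntk_scaled_base(base: float, scale: float, dim: int) -> float:
    """NTK-aware RoPE scaling: raise the rotary base so that
    low-frequency components are stretched to cover the longer
    context while high-frequency (local) components are largely
    preserved."""
    return base * scale ** (dim / (dim - 2))

def rope_inv_freq(base: float, dim: int) -> list[float]:
    """Standard RoPE inverse frequencies, one per dimension pair."""
    return [base ** (-2.0 * i / dim) for i in range(dim // 2)]

# Hypothetical example: extend a model with rotary base 10000 and
# head dimension 128 by a context factor of 4.
new_base = ntk_scaled_base(10000.0, 4.0, 128)
freqs = rope_inv_freq(new_base, 128)
```

Progressive training then fine-tunes on successively longer sequences at the new scale rather than jumping directly to the target context length.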
|
|
## Safety Evaluation

- **Methodologies:** red-teaming exercises, adversarial evaluations
- **Risk Categories:** CBRNE, cybersecurity, child safety
## Responsible AI Considerations

- **Transparency:** A Responsible Use Guide is available.
- **Accountability:** Developers are responsible for their application deployments.
- **Mitigation Strategies:** safety best practices, Purple Llama tools, Llama Guard
|
## Input/Output

- **Input Format:**
- **Accepted Modalities:**
- **Output Format:**
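The input and output formats are left unspecified above. For instruction-tuned Llama-3-family models, chat inputs are typically serialized with role-header special tokens before generation. A minimal sketch, assuming the Llama 3 special-token strings (the helper name is hypothetical, not part of this card):

```python
def format_chat(messages: list[dict]) -> str:
    # Hypothetical helper: serialize a chat into a Llama-3-style
    # prompt. The special-token strings below are an assumption
    # based on the Llama 3 tokenizer, not taken from this card.
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}<|eot_id|>"
        )
    # End with an open assistant header to cue the model's reply.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = format_chat([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
```

In practice a tokenizer's built-in chat template should be preferred over hand-rolled formatting, since it guarantees the token layout matches what the model saw during fine-tuning.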
|
## Release Notes

- **Version:**
- **Date:**
- **Notes:** Model release. This is a static model.
|
|
|