Model Type | Pretrained and fine-tuned generative text models |
|
Use Cases |
Areas: | |
Primary Use Cases: | Assistant-like chat, natural language generation tasks |
|
Limitations: | English only; subject to the Acceptable Use Policy; potential for bias and inaccurate responses |
|
Considerations: | Perform application-specific safety testing |
|
|
Additional Notes | Llama 2 70B uses Grouped-Query Attention (GQA) for improved inference scalability |
|
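The note above mentions Grouped-Query Attention (GQA), in which several query heads share each key/value head so the key/value cache is smaller at inference time. The snippet below is a minimal, illustrative sketch of that idea in PyTorch; the head counts, dimensions, and projection matrices are toy assumptions, not Llama 2 70B's actual configuration or Meta's implementation.

```python
# Illustrative sketch of Grouped-Query Attention (GQA): several query heads
# share each key/value head, which shrinks the KV cache and speeds up decoding.
# Head counts and dimensions are toy assumptions, not Llama 2 70B's configuration.
import torch
import torch.nn.functional as F

def grouped_query_attention(x, wq, wk, wv, n_q_heads, n_kv_heads):
    """x: (batch, seq, dim); wq/wk/wv: projection matrices."""
    b, s, d = x.shape
    head_dim = d // n_q_heads
    group = n_q_heads // n_kv_heads                  # query heads per KV head

    q = (x @ wq).view(b, s, n_q_heads, head_dim).transpose(1, 2)   # (b, Hq, s, hd)
    k = (x @ wk).view(b, s, n_kv_heads, head_dim).transpose(1, 2)  # (b, Hkv, s, hd)
    v = (x @ wv).view(b, s, n_kv_heads, head_dim).transpose(1, 2)

    # Repeat each KV head so every group of query heads reads the same K/V.
    k = k.repeat_interleave(group, dim=1)            # (b, Hq, s, hd)
    v = v.repeat_interleave(group, dim=1)

    scores = (q @ k.transpose(-2, -1)) / head_dim ** 0.5
    causal = torch.triu(torch.ones(s, s), diagonal=1).bool()       # causal mask
    scores = scores.masked_fill(causal, float("-inf"))
    out = F.softmax(scores, dim=-1) @ v              # (b, Hq, s, hd)
    return out.transpose(1, 2).reshape(b, s, d)

# Toy usage: 8 query heads sharing 2 KV heads.
dim, n_q, n_kv = 64, 8, 2
x = torch.randn(1, 5, dim)
wq = torch.randn(dim, dim)
wk = torch.randn(dim, dim // n_q * n_kv)             # KV projections are narrower
wv = torch.randn(dim, dim // n_q * n_kv)
print(grouped_query_attention(x, wq, wk, wv, n_q, n_kv).shape)  # (1, 5, 64)
```
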
Supported Languages | English (Optimized for dialogue use cases) |
|
Training Details |
Data Sources: | Publicly available online data |
|
Data Volume: | |
Methodology: | Supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF); an SFT sketch follows this table |
|
Context Length: | |
Hardware Used: | |
Model Architecture: | Auto-regressive language model with an optimized transformer architecture; a decoding sketch follows this table |
|
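The methodology row above lists supervised fine-tuning (SFT) followed by RLHF. As a hedged illustration of the SFT step only, the sketch below runs one optimizer step of the standard causal-LM objective: predict token t+1 from tokens up to t and minimize cross-entropy. The tiny model and random batch are placeholders, not Meta's training setup.

```python
# Minimal sketch of one supervised fine-tuning (SFT) step for a causal language
# model: predict token t+1 from tokens up to t and minimize cross-entropy.
# The tiny model and random batch are placeholders, not Llama 2 or Meta's setup;
# positional encodings are omitted for brevity.
import torch
import torch.nn as nn

vocab_size, dim, seq_len, batch = 100, 32, 16, 4

class TinyCausalLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, tokens):
        n = tokens.size(1)
        causal = torch.triu(torch.full((n, n), float("-inf")), diagonal=1)
        h = self.blocks(self.embed(tokens), mask=causal)
        return self.head(h)                               # (batch, seq, vocab)

model = TinyCausalLM()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

tokens = torch.randint(0, vocab_size, (batch, seq_len))   # stand-in SFT batch
logits = model(tokens[:, :-1])                            # inputs: all but last token
loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size), tokens[:, 1:].reshape(-1)  # targets: shifted by one
)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"SFT step loss: {loss.item():.3f}")
```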
|
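The architecture row describes an auto-regressive language model: each new token is generated conditioned on everything produced so far. The loop below sketches greedy decoding against any callable that returns next-token logits; the stub model and vocabulary size are assumptions for demonstration, not the released checkpoints.

```python
# Sketch of auto-regressive decoding: repeatedly feed the running sequence back
# into the model and append the most likely next token (greedy search).
# `model` is any callable mapping token ids -> logits; here it is a random stub.
import torch

@torch.no_grad()
def greedy_decode(model, prompt_ids, max_new_tokens=20, eos_id=None):
    ids = prompt_ids.clone()                           # (1, prompt_len)
    for _ in range(max_new_tokens):
        logits = model(ids)                            # (1, seq, vocab)
        next_id = logits[:, -1, :].argmax(dim=-1, keepdim=True)
        ids = torch.cat([ids, next_id], dim=-1)        # condition on everything so far
        if eos_id is not None and next_id.item() == eos_id:
            break
    return ids

stub = lambda ids: torch.randn(ids.size(0), ids.size(1), 100)  # fake logits
print(greedy_decode(stub, torch.tensor([[1, 2, 3]])))
```
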
Safety Evaluation |
Ethical Considerations: | Testing conducted mainly in English; potential for biased or objectionable responses; safety testing recommended before deployment |
|
|
Responsible AI Considerations |
Mitigation Strategies: | Safety testing and tuning before deployment |
|
|
Input/Output |
Input Format: | Text |
Accepted Modalities: | Text |
Output Format: | Text |
|
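Given the text-in / text-out fields above and the assistant-like chat use case, the snippet below sketches one way to assemble a dialogue-style prompt string before tokenization. The [INST]/<<SYS>> template mirrors the convention published for the Llama 2 chat models, but treat the exact template, system prompt, and downstream model handle as assumptions to verify against the official repository.

```python
# Sketch of assembling a dialogue-style prompt string for a text-in / text-out
# chat model. The [INST]/<<SYS>> template mirrors the published Llama 2 chat
# convention, but verify it against the official repository before relying on it.
def build_chat_prompt(system: str, user: str) -> str:
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

prompt = build_chat_prompt(
    system="You are a helpful assistant. Answer concisely.",
    user="Summarise what Grouped-Query Attention does in one sentence.",
)
print(prompt)  # this string would be tokenized and sent to the model;
               # the model's reply comes back as plain text.
```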