Model Type | |
Use Cases |
Primary Use Cases: | Natural language generation tasks; dialogue use cases for the fine-tuned variants. |
|
Limitations: | Tested primarily in English and may not perform predictably in other languages; potential for inaccurate or biased outputs. |
|
Considerations: | Follow Meta's Responsible Use Guide. |
|
Additional Notes | The model's fine-tuned variants (Llama-2-Chat) are optimized for dialogue applications and tuned for improved helpfulness and safety. |
|
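Because the Llama-2-Chat variants are tuned for dialogue, they expect prompts wrapped in Meta's chat instruction template. A minimal sketch of that template follows; the helper name `format_chat_prompt` is illustrative, not part of any official API.

```python
# Sketch of the Llama-2-Chat prompt template. The [INST] / <<SYS>> tags
# follow the format Meta documents for the chat variants; the helper name
# `format_chat_prompt` is a hypothetical convenience, not a real API.
def format_chat_prompt(system_msg: str, user_msg: str) -> str:
    """Wrap a system and user message in the Llama-2-Chat instruction template."""
    return (
        f"<s>[INST] <<SYS>>\n{system_msg}\n<</SYS>>\n\n"
        f"{user_msg} [/INST]"
    )

prompt = format_chat_prompt(
    "You are a helpful assistant.",
    "Explain auto-regressive decoding in one sentence.",
)
print(prompt)
```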
Training Details |
Data Volume: | |
Methodology: | Pretraining on publicly available online data, followed by supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF). |
|
Context Length: | |
Training Time: | January 2023 to July 2023 |
|
Hardware Used: | Meta's Research Super Cluster, NVIDIA A100-80GB GPUs |
|
Model Architecture: | Auto-regressive language model using an optimized transformer architecture. |
|
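The auto-regressive architecture noted above means generation is a loop: each new token is predicted conditioned on the full sequence produced so far. A minimal sketch, using a toy stand-in for the transformer's next-token prediction:

```python
# Minimal sketch of auto-regressive decoding. A real Llama 2 forward pass
# would score the whole vocabulary; `toy_next_token` is a hypothetical
# stand-in that just returns the current sequence length.
def toy_next_token(tokens: list[int]) -> int:
    return len(tokens)  # placeholder for argmax over model logits

def generate(prompt_tokens: list[int], max_new_tokens: int = 5) -> list[int]:
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        # Each prediction conditions on the entire prefix generated so far.
        tokens.append(toy_next_token(tokens))
    return tokens

print(generate([101, 102]))
```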
Responsible AI Considerations |
Mitigation Strategies: | Perform safety testing and tuning specific to applications. |
|