Model Type | Llama 2 |
Use Cases |
Areas: | |
Primary Use Cases: | assistant-like chat, natural language generation tasks |
|
Limitations: | Use in languages other than English; use in any manner that violates applicable laws or regulations |
|
Considerations: | Inputs to the chat models should follow the expected prompt formatting, including the INST and <<SYS>> tags, to obtain the intended behavior |
|
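The prompt-formatting consideration above can be sketched concretely. The helper below is hypothetical (the authoritative template lives in the Llama 2 reference code and tokenizer), but it illustrates the layout the chat models expect: the system message wrapped in `<<SYS>>` tags inside the first `[INST]` block.

```python
# Hypothetical helper illustrating the Llama 2 chat prompt layout.
# The tokenizer is assumed to add BOS/EOS tokens separately.
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

def build_prompt(system: str, user: str) -> str:
    """Assemble a single-turn chat prompt with a system message."""
    return f"{B_INST} {B_SYS}{system}{E_SYS}{user} {E_INST}"

prompt = build_prompt("You are a helpful assistant.", "What is Llama 2?")
print(prompt)
```

Multi-turn conversations repeat the `[INST] ... [/INST]` blocks, with the system message appearing only in the first block.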
|
Additional Notes | The fine-tuning data includes publicly available instruction datasets as well as over one million new human-annotated examples. |
|
Training Details |
Data Sources: | a new mix of publicly available online data; no Meta user data is included |
|
Data Volume: | |
Methodology: | supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF) |
|
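The two-stage methodology above can be sketched in miniature. Everything in this example is a hypothetical stand-in (a lookup-table "policy", a trivial reward model, toy data); real training updates transformer weights, but the flow is the same: imitate demonstrations first, then nudge the policy toward outputs a preference-trained reward model scores highly.

```python
# Toy sketch of the alignment pipeline: SFT on demonstrations,
# then an RLHF-style update driven by human preference pairs.
# All data and the "models" here are hypothetical stand-ins.

# Stage 1: supervised fine-tuning -- imitate (prompt, response) demos.
demos = [("hello", "hi there"), ("2+2?", "4")]
policy = {}  # prompt -> {candidate response: score}
for prompt, response in demos:
    policy.setdefault(prompt, {})[response] = 1.0

# Stage 2: RLHF -- human preference pairs train a reward model,
# and the policy is pushed toward higher-reward responses.
preferences = [("2+2?", "4", "five")]  # (prompt, chosen, rejected)

def reward(prompt: str, response: str) -> float:
    # Stand-in reward model: +1 if humans ever chose this response.
    return 1.0 if any(p == prompt and c == response
                      for p, c, _ in preferences) else 0.0

lr = 0.5  # step size for the toy policy update
for prompt, chosen, rejected in preferences:
    scores = policy.setdefault(prompt, {})
    scores[chosen] = scores.get(chosen, 0.0) + lr * reward(prompt, chosen)
    scores[rejected] = scores.get(rejected, 0.0) - lr

best = max(policy["2+2?"], key=policy["2+2?"].get)
print(best)  # the human-preferred answer ends up ranked highest
```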
Hardware Used: | Meta's Research Super Cluster, third-party cloud compute |
|
Model Architecture: | auto-regressive language model that uses an optimized transformer architecture |
|
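"Auto-regressive" means each new token is predicted from all tokens generated so far. The sketch below illustrates that loop with a toy deterministic next-token table standing in for the transformer (the table and its contents are purely illustrative).

```python
# Minimal sketch of auto-regressive generation: the next token is
# chosen conditioned on the running context, then appended to it.
# The lookup table below is a hypothetical stand-in for a transformer.

NEXT = {  # toy model: current token -> most likely next token
    "<s>": "Llama",
    "Llama": "2",
    "2": "is",
    "is": "auto-regressive",
    "auto-regressive": "</s>",
}

def generate(max_tokens: int = 10) -> list:
    tokens = ["<s>"]
    for _ in range(max_tokens):
        nxt = NEXT[tokens[-1]]   # condition on the context so far
        if nxt == "</s>":        # stop at the end-of-sequence token
            break
        tokens.append(nxt)
    return tokens[1:]            # drop the start-of-sequence marker

print(" ".join(generate()))
```

A real model conditions on the full token history (not just the last token) and samples from a probability distribution rather than a fixed table.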
|
Input Output |
Input Format: | text only |
Accepted Modalities: | text |
Output Format: | text only |
|
Release Notes |
Version: | |
Date: | |
Notes: | Llama 2 models released in sizes ranging from 7 billion to 70 billion parameters; the fine-tuned chat variants are optimized for dialogue use cases and aligned with human preferences for helpfulness and safety. |
|
|
|