Model Type: auto-regressive large language model, available in pretrained and instruction-tuned variants
Use Cases
Areas: commercial and research use
Applications: text generation, assistant-like chat
Primary Use Cases: instruction-tuned models are intended for assistant-like chat; pretrained models can be adapted for a range of natural language generation tasks
Limitations: use in any manner that violates applicable laws or regulations, or that falls outside the Acceptable Use Policy, is out of scope; the models are intended for use in English
Considerations: compliance with the Llama 3 Community License and the Acceptable Use Policy is required
|
|
Additional Notes: the models were pretrained on data with a cutoff of March 2023 for the 8B model and December 2023 for the 70B model. The carbon footprint of training was fully offset.
|
Supported Languages: English
Training Details
Data Sources: publicly available online data
Data Volume: over 15 trillion tokens of pretraining data
Methodology: supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF)
Context Length: 8k tokens
Hardware Used: Meta's custom-built GPU clusters (H100-80GB)
Model Architecture: auto-regressive language model using an optimized transformer architecture
|
|
Safety Evaluation
Ethical Considerations: model outputs are provided "as is"; developers should conduct their own safety testing before deployment
|
|
Responsible AI Considerations
Fairness: the model aims to be accessible and useful without unnecessary judgment or bias
Accountability: Meta and downstream developers share accountability for safe deployment
Mitigation Strategies: safety tooling such as Llama Guard, released under the Purple Llama project, provides input/output filtering
|
|
Input Output
Input Format: text
Accepted Modalities: text
Output Format: text and code
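For the instruction-tuned models, chats are serialized with Llama 3's special header tokens before being passed to the model as text. A minimal sketch of that serialization follows; the special-token strings match the published Llama 3 prompt format, but `format_chat` is a hypothetical helper written for illustration, not part of any official API.

```python
# Sketch of the Llama 3 instruct prompt format.
# The special tokens (<|begin_of_text|>, <|start_header_id|>,
# <|end_header_id|>, <|eot_id|>) follow the published Llama 3 format;
# format_chat itself is a hypothetical helper, not an official API.

def format_chat(messages):
    """Serialize a list of {'role': ..., 'content': ...} dicts into a prompt string."""
    parts = ["<|begin_of_text|>"]
    for msg in messages:
        parts.append(f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n")
        parts.append(msg["content"] + "<|eot_id|>")
    # End with an assistant header so the model generates the reply next.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = format_chat([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
])
print(prompt)
```

In practice, libraries such as Hugging Face `transformers` apply this template automatically via the tokenizer's chat template, so manual serialization is mainly useful for understanding what the model actually receives.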
|
Release Notes
Date: April 18, 2024
Notes: initial release of Meta Llama 3, available in 8B and 70B parameter sizes, each in pretrained and instruction-tuned variants
|
|
|