Model Type: text generation, multimodal

Use Cases
  Areas:
  Applications: assistant-like chat, natural language generation
  Primary Use Cases: pretrained models adapted for various NLG tasks
  Limitations: Tested only in English; not all use scenarios are addressed.
  Considerations: Developers may fine-tune Llama 3 for languages beyond English, provided applicable compliance requirements are met.
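Since assistant-like chat is the primary application, the sketch below shows how a conversation is rendered into a single prompt string using the publicly documented Llama 3 instruct chat template. The helper function itself is illustrative, not part of any official API.

```python
# Minimal sketch of formatting a chat prompt in the Llama 3 instruct
# template. The special tokens follow the publicly documented format;
# the helper function is a hypothetical convenience, not an official API.

def format_llama3_chat(messages):
    """Render a list of {role, content} dicts into a Llama 3 prompt string."""
    parts = ["<|begin_of_text|>"]
    for msg in messages:
        parts.append(
            f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
            f"{msg['content']}<|eot_id|>"
        )
    # End with an open assistant header to cue the model's reply.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = format_llama3_chat([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
```

In practice, tokenizer utilities that ship with the model apply this template automatically; the point here is only to make the role/turn structure of chat-formatted input concrete.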
|
|
Additional Notes: Some trade-off between model helpfulness and alignment is likely unavoidable.
|
Supported Languages:
Training Details
  Data Sources: publicly available online data
  Data Volume:
  Methodology: auto-regressive transformer architecture, supervised fine-tuning, reinforcement learning from human feedback (RLHF)
  Context Length:
  Training Time:
  Hardware Used:
  Model Architecture: optimized transformer architecture
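"Auto-regressive" means the model generates one token at a time, feeding each generated token back in as context for the next prediction. The toy loop below illustrates that idea with greedy decoding; the scoring function is a made-up stand-in, not Llama 3.

```python
# Toy illustration of auto-regressive generation: each step feeds the
# growing sequence back in and appends the highest-scoring next token.
# The "model" here is a hypothetical stand-in, not a real language model.

def toy_next_token_scores(tokens):
    """Fake next-token scorer: favors the token after the last one, mod 5."""
    last = tokens[-1]
    return {t: (1.0 if t == (last + 1) % 5 else 0.0) for t in range(5)}

def greedy_generate(prompt_tokens, steps):
    tokens = list(prompt_tokens)
    for _ in range(steps):
        scores = toy_next_token_scores(tokens)
        tokens.append(max(scores, key=scores.get))  # greedy: pick the argmax
    return tokens

out = greedy_generate([0], 4)  # -> [0, 1, 2, 3, 4]
```

Real decoders replace the toy scorer with the transformer's next-token distribution and typically sample from it (temperature, top-p) rather than always taking the argmax, but the feed-back loop is the same.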
|
|
Safety Evaluation
  Methodologies: red teaming, adversarial evaluations
  Findings: residual risks may remain after mitigation
  Risk Categories: child safety risks, security risks, bias risks
  Ethical Considerations: Openness, inclusivity, and helpfulness are core values.
|
|
Responsible AI Considerations
  Fairness: Bias and fairness were considered throughout development.
  Transparency: Safety evaluations and risk assessments are documented in detail.
  Accountability: Developers are advised to perform application-specific safety testing.
  Mitigation Strategies: red teaming, adversarial evaluations, and built-in safety mitigations
|
|
Input Output
  Input Format:
  Accepted Modalities:
  Output Format:
|
Release Notes
  Version:
  Date:
  Notes: Released with improved performance for dialogue use cases.
|