Model Type:

Use Cases

Areas:
Applications: assistant-like chat; natural language generation tasks
Primary Use Cases: chat assistant; language tasks
Limitations: limited to English; possibility of generating unpredictable outputs
Considerations: specific formatting is needed for chat versions

Supported Languages: English (optimized for dialogue use cases)
|
Training Details

Data Sources: publicly available online data; publicly available instruction datasets
Data Volume:
Methodology: supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF)
Context Length:
Training Period: January 2023 to July 2023
Hardware Used: Meta's Research Super Cluster, production clusters, and third-party cloud compute
Model Architecture: auto-regressive language model with an optimized transformer architecture
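"Auto-regressive" means the model generates text one token at a time, conditioning each prediction on everything produced so far. The sketch below illustrates that loop with a hypothetical stand-in model (`toy_next_token` is not part of any real API; a real model would return a distribution over a large vocabulary):

```python
def toy_next_token(context):
    """Hypothetical stand-in for a language model: maps a context to a next token.

    A real model would score every token in its vocabulary given the context;
    here we cycle through a tiny fixed vocabulary purely for illustration.
    """
    vocab = ["the", "cat", "sat", "<eos>"]
    return vocab[len(context) % len(vocab)]

def generate(prompt_tokens, max_new_tokens=8, eos="<eos>"):
    """Greedy auto-regressive decoding: append one predicted token at a time."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        nxt = toy_next_token(tokens)
        if nxt == eos:  # stop when the model emits the end-of-sequence token
            break
        tokens.append(nxt)
    return tokens

print(generate(["hello"]))  # → ['hello', 'cat', 'sat']
```

The key property is that the loop feeds its own output back in as input, which is why generation cost grows with sequence length and why outputs can drift in ways that are hard to predict in advance.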
|
|
Safety Evaluation

Methodologies: internal evaluations; automatic safety benchmarks
Findings: competitive with open-source models on safety benchmarks
Risk Categories: potential for inaccurate, biased, or otherwise objectionable responses
Ethical Considerations: developers should perform safety testing tailored to their applications
|
|
Responsible AI Considerations

Fairness: testing was mostly conducted in English
Transparency: model outputs cannot be fully predicted
Accountability:
Mitigation Strategies: a Responsible Use Guide is provided
|
|
Input/Output

Input Format:
Accepted Modalities:
Output Format:
Performance Tips: use the recommended formatting for chat versions
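The card notes that chat versions need specific formatting but does not spell out the template. The sketch below is an assumption modeled on common open-source chat templates (the `<s>[INST]`/`<<SYS>>` markers and the `build_chat_prompt` helper are illustrative, not taken from this card); consult the model's own documentation for the exact format:

```python
def build_chat_prompt(system_message, user_message):
    """Wrap a system and a user message in a hypothetical chat template.

    The markers used here are an assumption; the actual required
    formatting is defined by the model's documentation.
    """
    return (
        f"<s>[INST] <<SYS>>\n{system_message}\n<</SYS>>\n\n"
        f"{user_message} [/INST]"
    )

prompt = build_chat_prompt(
    "You are a helpful assistant.",
    "Summarize this model card in one sentence.",
)
print(prompt)
```

Whatever the exact markers, the design point is the same: the chat variants were fine-tuned on dialogue wrapped in a fixed template, so deviating from that template at inference time tends to degrade response quality.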
|
|
Release Notes

Version:
Date:
Notes: improved model safety, informed by community feedback
|
|
|