**Model Type:**

**Use Cases**

| Field | Details |
|---|---|
| Areas | |
| Applications | assistant-like chat, natural language generation |
| Primary Use Cases | |
| Limitations | English-focused and not tested in all languages; outputs can be unpredictable |
| Considerations | Follow the expected input formatting to align with the intended use cases (see the prompt-format sketch below) |
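To make the formatting guidance concrete, here is a minimal sketch of single-turn prompt construction. It assumes a Llama-2-style `[INST]`/`<<SYS>>` chat template, which is an assumption on our part rather than something this card confirms; consult the model's own documentation for the exact template it expects.

```python
# Minimal sketch: wrap a user message in an assumed Llama-2-style chat
# template. The [INST] / <<SYS>> markers are an assumption, not confirmed
# by this model card -- verify against the model's own documentation.

def format_prompt(user_message: str, system_prompt: str = "") -> str:
    """Build a single-turn chat prompt in the assumed [INST] format."""
    if system_prompt:
        return (
            f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
            f"{user_message} [/INST]"
        )
    return f"[INST] {user_message} [/INST]"


print(format_prompt("Summarize GPTQ quantization in one sentence."))
```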
**Additional Notes:** Compatible with AutoGPTQ and major GPTQ clients. Choose quantization parameters to match your hardware (a loading sketch follows below).
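As an illustration of the AutoGPTQ compatibility noted above, a minimal loading sketch via the Hugging Face `transformers` integration, which dispatches GPTQ checkpoints to AutoGPTQ under the hood. The model ID is a placeholder, and the snippet assumes `transformers`, `optimum`, and `auto-gptq` are installed.

```python
# Sketch: load a GPTQ-quantized checkpoint through transformers.
# "org/model-GPTQ" is a placeholder model ID, not this model's actual repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "org/model-GPTQ"  # hypothetical repository name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # spread layers across available GPU/CPU memory
)

inputs = tokenizer("[INST] Hello! [/INST]", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```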
**Supported Languages:**
**Training Details**

| Field | Details |
|---|---|
| Data Sources | Publicly available online data |
| Data Volume | |
| Methodology | Pretrained, then fine-tuned with supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF) |
| Context Length | |
| Training Time | January 2023 to July 2023 |
| Hardware Used | |
| Model Architecture | Optimized transformer architecture |
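For context on the RLHF step, the textbook objective tunes the policy to maximize a learned reward while staying close, in KL divergence, to the SFT model. This is the standard formulation, not necessarily the exact variant used for this model:

```latex
\max_{\pi_\theta} \;
\mathbb{E}_{x \sim \mathcal{D},\; y \sim \pi_\theta(\cdot \mid x)}
\left[ r_\phi(x, y) \right]
\;-\; \beta \,
\mathbb{D}_{\mathrm{KL}}\!\left( \pi_\theta(\cdot \mid x) \,\Vert\, \pi_{\mathrm{SFT}}(\cdot \mid x) \right)
```

Here $r_\phi$ is a reward model trained on human preference comparisons, and $\beta$ controls how far the tuned policy may drift from the SFT baseline.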
**Safety Evaluation**

| Field | Details |
|---|---|
| Methodologies | Internal evaluations library |
| Findings | May produce inaccurate, biased, or objectionable responses; testing was primarily in English |
| Risk Categories | |
| Ethical Considerations | Perform safety testing tailored to your use case before deploying applications |
**Responsible AI Considerations**

| Field | Details |
|---|---|
| Fairness | Testing was primarily in English and does not guarantee unbiased outputs in all languages |
| Transparency | Evaluation data and results are disclosed |
| Accountability | Developers are responsible for application-specific safety testing |
| Mitigation Strategies | Community feedback and iterative improvements |
**Input / Output**

| Field | Details |
|---|---|
| Input Format | |
| Accepted Modalities | |
| Output Format | |
| Performance Tips | Select quantization parameters that balance VRAM efficiency against accuracy (see the configuration sketch below) |
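To illustrate that trade-off, here is a sketch of quantizing a base checkpoint with `transformers`' `GPTQConfig`. The main knobs are bit width, group size, and activation-order quantization; smaller groups and activation ordering generally improve accuracy at some VRAM and speed cost. The parameter values and model IDs below are illustrative, not this model's actual settings.

```python
# Sketch: quantize a base model with GPTQ via transformers + auto-gptq.
# Parameter values are illustrative; pick them for your VRAM/accuracy budget.
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

base_id = "org/base-model"  # hypothetical base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_id)

quant_config = GPTQConfig(
    bits=4,           # 4-bit weights: large VRAM savings, small accuracy hit
    group_size=128,   # smaller groups (e.g. 32) raise accuracy and VRAM use
    desc_act=True,    # activation-order quantization: slower, more accurate
    dataset="c4",     # calibration data for the quantization pass
    tokenizer=tokenizer,
)

model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=quant_config,
    device_map="auto",
)
```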
**Release Notes**

| Field | Details |
|---|---|
| Version | |
| Date | |
| Notes | Pretrained on 2 trillion tokens; fine-tuned with RLHF for dialogue applications |