Model Type: Auto-regressive transformer, instruction-tuned, multilingual, text generation

Use Cases
Areas: Commercial use, research use, education, climate, open innovation
Applications: Chatbots, synthetic data generation and distillation, natural language generation tasks
Primary Use Cases: Assistant-like chat, multilingual dialogue applications
Limitations: Use in unsupported languages without further fine-tuning, use that violates applicable laws, and any other use out of scope under the Acceptable Use Policy
Considerations: Comprehensive safety and performance testing is required before deployment

Additional Notes: The model supports a longer context window and uses Grouped-Query Attention (GQA) for faster, more memory-efficient inference.
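As an illustration of the mechanism, the sketch below shows grouped-query attention in plain PyTorch: the key/value projections use fewer heads than the query projection, and each key/value head is shared by a group of query heads, which shrinks the KV cache at inference time. All head counts, dimensions, and weights are illustrative placeholders, not this model's actual configuration.

```python
# Minimal grouped-query attention sketch (illustrative only; head counts,
# dimensions, and random weights are placeholders, not this model's config).
import torch
import torch.nn.functional as F


def grouped_query_attention(x, wq, wk, wv, n_q_heads, n_kv_heads):
    """x: (batch, seq, dim); wq/wk/wv: linear projection weights."""
    b, t, d = x.shape
    head_dim = d // n_q_heads
    group = n_q_heads // n_kv_heads  # query heads sharing one KV head

    q = (x @ wq).view(b, t, n_q_heads, head_dim).transpose(1, 2)
    k = (x @ wk).view(b, t, n_kv_heads, head_dim).transpose(1, 2)
    v = (x @ wv).view(b, t, n_kv_heads, head_dim).transpose(1, 2)

    # Repeat each KV head so that `group` query heads attend to it.
    k = k.repeat_interleave(group, dim=1)
    v = v.repeat_interleave(group, dim=1)

    scores = q @ k.transpose(-2, -1) / head_dim ** 0.5
    out = F.softmax(scores, dim=-1) @ v
    return out.transpose(1, 2).reshape(b, t, d)


# Illustrative sizes: 8 query heads share 2 KV heads (group size 4).
dim, n_q, n_kv = 64, 8, 2
x = torch.randn(1, 10, dim)
wq = torch.randn(dim, dim)
wk = torch.randn(dim, dim * n_kv // n_q)
wv = torch.randn(dim, dim * n_kv // n_q)
print(grouped_query_attention(x, wq, wk, wv, n_q, n_kv).shape)  # (1, 10, 64)
```

Because only `n_kv_heads` key/value tensors are cached per layer, the KV cache shrinks by the group factor relative to standard multi-head attention, which is the main inference benefit GQA provides.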
Supported Languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai (fluent in each)

Training Details
Data Sources: Publicly available online data
Data Volume:
Methodology: Supervised fine-tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF); a minimal SFT sketch follows this section
Context Length:
Training Time:
Hardware Used:
Model Architecture: Optimized transformer architecture using Grouped-Query Attention (GQA); see the sketch under Additional Notes
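As a rough illustration of the SFT step listed above, the sketch below computes a next-token cross-entropy loss over the assistant response only, masking prompt tokens with the common -100 label convention. The shapes and masking scheme are assumptions for illustration and do not reflect the actual training code.

```python
# Minimal SFT loss sketch: cross-entropy on response tokens only.
# Shapes and the -100 masking convention are illustrative assumptions.
import torch
import torch.nn.functional as F


def sft_loss(logits, labels):
    """logits: (batch, seq, vocab); labels: (batch, seq), -100 where masked."""
    # Shift so each position predicts the next token.
    logits = logits[:, :-1, :]
    labels = labels[:, 1:]
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        labels.reshape(-1),
        ignore_index=-100,  # prompt tokens contribute no gradient
    )


# Illustrative shapes: batch of 2, sequence of 6, vocabulary of 100.
logits = torch.randn(2, 6, 100)
labels = torch.randint(0, 100, (2, 6))
labels[:, :3] = -100  # mask the prompt portion of each sequence
print(sft_loss(logits, labels).item())
```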

Safety Evaluation
Methodologies: Red-teaming and adversarial testing
Findings:
Risk Categories:
Ethical Considerations: Safety fine-tuning and the use of safety datasets

Responsible AI Considerations
Fairness: Efforts were made to promote fairness across supported languages and tasks
Transparency: Model capabilities and limitations are communicated openly
Accountability: The model is released with usage guides and responsible deployment guidance
Mitigation Strategies: Safety datasets, red-teaming, and responsible use guides

Input and Output
Input Format:
Accepted Modalities:
Output Format:
Performance Tips: Keep inference software up to date and run on compatible hardware; a minimal loading sketch follows
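As a hypothetical loading example, an assistant-style model of this kind is typically served through the Hugging Face `transformers` text-generation pipeline. The model identifier `org/model-name`, the bfloat16 dtype, and the device mapping below are placeholders and assumptions, not settings confirmed by this card.

```python
# Hypothetical inference sketch; "org/model-name" is a placeholder identifier
# and the dtype/device settings are assumptions for recent GPU hardware.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="org/model-name",       # placeholder: substitute the released checkpoint
    torch_dtype=torch.bfloat16,   # reduced precision for faster inference
    device_map="auto",            # spread layers across available accelerators
)

messages = [
    {"role": "user", "content": "Explain grouped-query attention in two sentences."},
]
print(generator(messages, max_new_tokens=128)[0]["generated_text"])
```

Using a recent `transformers` release and hardware that supports the chosen dtype is the kind of setup the Performance Tips field above refers to.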

Release Notes
Version:
Date:
Notes: First release with multilingual support and instruction fine-tuning.