Model Type: text-to-text language model

Use Cases

Areas: research, commercial applications

Primary Use Cases: natural language generation

Limitations: May amplify societal biases and produce toxic responses; may return inaccurate responses or omit key information.

Considerations: Developers should work with their internal teams to ensure the model meets industry and use-case requirements.

Supported Languages: English (high), multilingual (varied)

Training Details

Data Sources: webpages, dialogues, articles, legal documents, math texts, science literature, financial documents

Data Volume: not specified
Methodology: not specified
Context Length: not specified
Hardware Used: not specified
Model Architecture: Transformer decoder (auto-regressive language model)
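
The architecture field above describes a decoder-only, auto-regressive Transformer: text is produced one token at a time, with each new token conditioned on everything generated so far. A minimal sketch of that decoding loop, using a toy stand-in for the real model (the vocabulary, function names, and scoring rule here are all hypothetical illustrations, not part of this model's API):

```python
# Sketch of auto-regressive (one-token-at-a-time) greedy decoding.
# `next_token_logits` is a toy stand-in; a real Transformer decoder
# would replace it with a forward pass over the model.
from typing import List

VOCAB = ["<eos>", "hello", "world", "!"]  # hypothetical toy vocabulary

def next_token_logits(context: List[int]) -> List[float]:
    # Toy rule: step through the vocabulary, then favor <eos>.
    if len(context) >= 3:
        return [1.0, 0.0, 0.0, 0.0]  # favor the end-of-sequence token
    favored = (context[-1] + 1) % len(VOCAB) if context else 1
    return [1.0 if i == favored else 0.0 for i in range(len(VOCAB))]

def generate(prompt: List[int], max_new_tokens: int = 10) -> List[int]:
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        logits = next_token_logits(tokens)          # score every candidate
        tok = max(range(len(logits)), key=logits.__getitem__)  # greedy pick
        tokens.append(tok)                          # feed it back as context
        if VOCAB[tok] == "<eos>":
            break
    return tokens
```

The loop structure is the same regardless of model size: score, pick, append, repeat until an end-of-sequence token or a length budget is hit.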
Responsible AI Considerations

Fairness: The model may reflect biases present in the training data.

Transparency: The model architecture and training methodologies are described in the report.

Accountability: Developers using the model should ensure it meets industry requirements and mitigates potential biases.

Mitigation Strategies: Introduced QA- and alignment-style data to improve performance.
Input/Output

Input Format: not specified
Accepted Modalities: not specified
Output Format: not specified
Performance Tips: Performs best when inputs are kept to 8,000 characters or fewer.
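
Given the tip above, callers may want to enforce the character budget client-side before submitting a prompt. A minimal sketch, assuming the 8k figure is treated as a hard budget and that keeping the most recent text is the desired policy (the helper name and truncation strategy are illustrative, not part of the model's API):

```python
MAX_CHARS = 8_000  # budget taken from the performance tip above

def truncate_prompt(prompt: str, max_chars: int = MAX_CHARS) -> str:
    """Keep the most recent max_chars characters of the prompt.

    The tail is kept (rather than the head) on the assumption that the
    end of a prompt usually carries the immediate instruction/context.
    """
    if len(prompt) <= max_chars:
        return prompt
    return prompt[-max_chars:]
```

Smarter policies (e.g. dropping whole earlier turns of a dialogue) preserve more coherence than a raw character cut, but the budget check itself is this simple.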