Model Type | Text generation, decoder-only large language model

Use Cases
Areas: | Content Creation and Communication, Research and Education
Applications: | Text Generation, Chatbots and Conversational AI, Text Summarization, NLP Research, Language Learning Tools, Knowledge Exploration
Primary Use Cases: | Summarization, Question Answering, Reasoning (illustrative prompt templates follow this section)
Limitations: | Biases in training data, sensitivity to context length, factual inaccuracies, difficulty with nuanced and open-ended tasks
Considerations: | Developers are encouraged to perform continuous evaluation, apply de-biasing techniques, and build safety measures into their applications.

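Since the primary use cases above are prompt-driven, a minimal sketch of how such prompts might be assembled is shown below. The wording of these templates is an assumption for illustration only, not a documented prompt format for the model.

```python
def summarization_prompt(document: str) -> str:
    # Illustrative wording only; the template should be tuned for the specific model.
    return f"Summarize the following text in a few sentences:\n\n{document}\n\nSummary:"


def question_answering_prompt(context: str, question: str) -> str:
    # Constraining the answer to the supplied context helps limit factual inaccuracies.
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
```
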
Additional Notes | At release, the model provides an implementation designed for Responsible AI development compared to similarly sized models in the ecosystem.

Supported Languages | English (see Output Format below)

Training Details
Data Sources: | Web Documents, Code, Mathematics
Data Volume: | Not specified
Hardware Used: | Not specified
Model Architecture: | Not specified

Safety Evaluation
Risk Categories: | Text-to-text content safety, text-to-text representational harms, memorization, large-scale harm (a sketch of an evaluation harness follows this section)

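The risk categories above are typically probed by running category-specific prompt sets through the model and reviewing the responses. The toy harness below only sketches that workflow; `RISK_PROMPTS` and `generate_fn` are hypothetical placeholders, and real evaluations rely on curated benchmarks, trained classifiers, and human raters.

```python
# Hypothetical prompt sets per risk category; placeholders stand in for real benchmark items.
RISK_PROMPTS = {
    "content_safety": ["<prompt probing unsafe content>"],
    "representational_harms": ["<prompt probing stereotypes>"],
    "memorization": ["<prefix of a potentially memorized document>"],
}


def collect_responses(generate_fn, risk_prompts=RISK_PROMPTS):
    """Run each category's prompts through the model and gather outputs for review."""
    results = {}
    for category, prompts in risk_prompts.items():
        results[category] = [{"prompt": p, "response": generate_fn(p)} for p in prompts]
    return results
```
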
Responsible AI Considerations
Fairness: | The model underwent careful scrutiny, and the input data pre-processing is described in this card. Continuous monitoring and de-biasing techniques are encouraged.
Transparency: | This model card and the accompanying technical documentation detail the model architecture, capabilities, and evaluations.
Accountability: | Developers are encouraged to follow guidelines for responsible use and to adhere to specific product policies and application use cases.
Mitigation Strategies: | Filtering of harmful content and PII, transparency in documentation, and guidelines for responsible usage and development (a minimal filtering sketch follows this section).

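As a rough illustration of the filtering strategy listed above, the sketch below redacts simple PII patterns and applies a blocklist check to both the prompt and the response. The regex patterns, the `BLOCKLIST` contents, and the `generate_fn` callable are hypothetical; production systems should rely on dedicated PII-detection and safety-classification tooling.

```python
import re

# Illustrative patterns only; not a substitute for dedicated PII/safety tooling.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\b\+?\d[\d\s().-]{7,}\d\b")
BLOCKLIST = {"example-blocked-term"}  # hypothetical placeholder terms


def redact_pii(text: str) -> str:
    text = EMAIL_RE.sub("[EMAIL]", text)
    return PHONE_RE.sub("[PHONE]", text)


def passes_blocklist(text: str) -> bool:
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKLIST)


def safe_generate(generate_fn, prompt: str) -> str:
    # Filter both the incoming prompt and the generated response.
    prompt = redact_pii(prompt)
    if not passes_blocklist(prompt):
        return "Request declined by content policy."
    response = redact_pii(generate_fn(prompt))
    return response if passes_blocklist(response) else "Response withheld by content policy."
```
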
Input / Output
Input Format: | Text string (e.g., a question, a prompt, or a document)
Accepted Modalities: | Text
Output Format: | Generated English-language text (a minimal generation sketch follows this section)

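A minimal text-in, text-out sketch is shown below, assuming the checkpoint is published for the Hugging Face `transformers` library; the model identifier is a placeholder, not the actual checkpoint name.

```python
from transformers import pipeline

# "org/decoder-only-model" is a placeholder identifier, not the actual checkpoint name.
generator = pipeline("text-generation", model="org/decoder-only-model")

prompt = "Question: What are large language models typically trained on?\nAnswer:"
outputs = generator(prompt, max_new_tokens=128, do_sample=False)

# The pipeline returns a list of dicts; "generated_text" holds the English-language output.
print(outputs[0]["generated_text"])
```

Generation parameters such as `max_new_tokens` and the sampling settings should be tuned per application.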