Model Type | Text-to-text, decoder-only large language models |
|
Use Cases |
Areas: | Research, Commercial Applications |
|
Applications: | Text Generation, Chatbots and Conversational AI, Text Summarization |
|
Primary Use Cases: | NLP Research, Language Learning Tools, Knowledge Exploration |
|
Limitations: | Training data quality, Context and task complexity, Language ambiguity and nuance, Factual accuracy, Common sense application |
|
Considerations: | Training data quality, scope, and context length influence capabilities. |
|
|
Additional Notes | These are high-performance, open large language models designed to support responsible AI development. |
|
Supported Languages | |
Training Details |
Data Sources: | Web Documents, Code, Mathematics |
|
Data Volume: | |
Hardware Used: | |
|
Safety Evaluation |
Methodologies: | Structured evaluations and internal red-teaming |
|
Findings: | Acceptable thresholds were met across categories including child safety, content safety, representational harms, memorization, and large-scale harms. |
|
Risk Categories: | Text-to-Text Content Safety, Text-to-Text Representational Harms, Memorization, Large-scale harm |
|
Ethical Considerations: | Assessed through structured evaluations and internal red-teaming. |
|
|
Responsible AI Considerations |
Fairness: | Input data undergoes careful scrutiny and pre-processing, and the models are evaluated for bias and fairness. |
|
Transparency: | This model card summarizes details of the models' architecture, capabilities, limitations, and evaluation processes. |
|
Accountability: | |
Mitigation Strategies: | Continuous monitoring and de-biasing techniques. |
|
|
Input Output |
Input Format: | Text string, such as a question, prompt, or document to be summarized |
Accepted Modalities: | Text |
Output Format: | Generated text, such as an answer to a prompt or a summary |
Performance Tips: | Providing longer, more detailed context in the prompt generally leads to better outputs. |
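As a rough illustration of this text-in, text-out interface (not part of the original card), the sketch below loads the model with the Hugging Face transformers library and generates a completion; the checkpoint identifier is a placeholder assumption and should be replaced with the model's actual published id.

```python
# Minimal text-in / text-out sketch using Hugging Face transformers.
# The checkpoint id below is a placeholder assumption, not confirmed by
# this card; substitute the actual published identifier.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-9b"  # placeholder id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Summarize the benefits of open large language models in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Decoder-only models continue the prompt, so the decoded output contains
# the prompt followed by the newly generated tokens.
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```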
|
|
Release Notes |
Version: | |
Notes: | Version with added ChatML tokens for fine-tuning (see the formatting sketch after these notes). |
|
Version: | |
Notes: | Initial release of the Gemma PT 9B model. |
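Because the later release adds ChatML tokens for fine-tuning, the sketch below illustrates the conventional ChatML turn format. The <|im_start|> / <|im_end|> strings are the common ChatML defaults and are an assumption here, not confirmed by this card; verify them against the tokenizer's registered special tokens.

```python
# Sketch of ChatML-style prompt construction. Assumption: the added tokens
# follow the usual <|im_start|> / <|im_end|> convention; check the model's
# tokenizer special tokens before fine-tuning or inference.
def build_chatml_prompt(messages):
    """Render a list of {"role", "content"} dicts into a ChatML-style string."""
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    ]
    # Leave the assistant turn open so the model generates the reply.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize this model card in one sentence."},
])
print(prompt)
```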
|
|
|