Model Type: text generation, decoder-only, large language model

Use Cases
Areas: Content Creation, Communication, Research, Education
Applications: Text Generation, Chatbots and Conversational AI, Text Summarization, NLP Research, Language Learning Tools
Primary Use Cases: Content creation, research, and education.
Limitations: Bias and Fairness, Misinformation and Misuse, Transparency and Accountability, Privacy Violations
Considerations: Perform continuous monitoring and implement appropriate content safety safeguards.

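The content-safety consideration above can be operationalized even with a minimal output filter. The sketch below is illustrative only: the blocklist phrases and the fallback message are assumptions, not part of this model card, and a production deployment should use a dedicated safety classifier rather than keyword matching.

```python
# Minimal sketch of a content-safety safeguard on generated text.
# The blocklist and fallback string are illustrative assumptions;
# real systems should use a trained safety classifier instead.
BLOCKLIST = {"credit card number", "social security number"}

def is_safe(generated_text: str) -> bool:
    """Return False if the generated text contains any blocklisted phrase."""
    lowered = generated_text.lower()
    return not any(phrase in lowered for phrase in BLOCKLIST)

def moderate(generated_text: str, fallback: str = "[response withheld]") -> str:
    """Pass safe generations through; replace unsafe ones with a fallback."""
    return generated_text if is_safe(generated_text) else fallback
```

Continuous monitoring would additionally log which generations were withheld, so thresholds and filters can be tuned over time.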
Supported Languages: English
Training Details
Data Sources: Web Documents, Code, Mathematics
Data Volume:
Methodology:
Hardware Used:
Model Architecture:

Safety Evaluation
Methodologies: Text-to-Text Content Safety, Text-to-Text Representational Harms Benchmark, Memorization
Findings: Results were within acceptable thresholds across all assessed risk categories.
Risk Categories: child safety, content safety, representational harms, memorization, large-scale harms
Ethical Considerations: Results were within acceptable thresholds for meeting internal policies.

Responsible AI Considerations
Fairness: Input data undergoes careful scrutiny and pre-processing, and posterior evaluations are performed.
Transparency: This model card summarizes model details.
Accountability:
Mitigation Strategies: Continuous monitoring and the exploration of de-biasing techniques are encouraged.

Input Output
Input Format: A text string, such as a question, a prompt, or a document to be summarized.
Accepted Modalities: Text
Output Format: Generated English-language text.
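The input is a plain text string, but the instruction-tuned Gemma variants additionally wrap conversation turns in control tokens. The sketch below assembles such a prompt by hand; the `<start_of_turn>`/`<end_of_turn>` strings follow the published Gemma chat format, though in practice the tokenizer's built-in chat template should be used instead.

```python
def build_prompt(turns):
    """Format alternating (role, text) chat turns into a Gemma-style prompt.

    `turns` is a list of (role, text) pairs with role in {"user", "model"}.
    The control tokens below follow the published Gemma chat format and
    are an assumption for any other variant of the model.
    """
    parts = []
    for role, text in turns:
        parts.append(f"<start_of_turn>{role}\n{text}<end_of_turn>\n")
    parts.append("<start_of_turn>model\n")  # cue the model to respond
    return "".join(parts)

prompt = build_prompt([("user", "Summarize this document in one sentence.")])
```

The trailing `<start_of_turn>model` marker signals that the next tokens should be the model's reply, which is returned as generated English-language text.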
|
|
Release Notes
Version:
Date:
Notes: An update over the original instruction-tuned Gemma release, trained using a novel RLHF method, with improvements in quality, coding capabilities, factuality, instruction following, and multi-turn conversation quality.