Recurrentgemma 9B by Google


Tags: Arxiv:1705.03551, Arxiv:1804.06876, Arxiv:1804.09301, Arxiv:1809.02789, Arxiv:1811.00937, Arxiv:1904.09728, Arxiv:1905.07830, Arxiv:1905.10044, Arxiv:1907.10641, Arxiv:1911.01547, Arxiv:1911.11641, Arxiv:2009.03300, Arxiv:2009.11462, Arxiv:2101.11718, Arxiv:2103.03874, Arxiv:2107.03374, Arxiv:2108.07732, Arxiv:2109.07958, Arxiv:2110.08193, Arxiv:2110.14168, Arxiv:2203.09509, Arxiv:2206.04615, Arxiv:2304.06364, Arxiv:2402.19427, Autotrain compatible, Endpoints compatible, Recurrent gemma, Region:us, Safetensors, Sharded, Tensorflow

Recurrentgemma 9B Benchmarks

Benchmark legend: nn.n% shows how the model compares to the reference models Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Recurrentgemma 9B (google/recurrentgemma-9b)

Recurrentgemma 9B Parameters and Internals

Model Type 
text generation, summarization, question answering, reasoning
Use Cases 
Areas:
content creation, communication, research, education
Applications:
text generation, chatbots, conversational AI, text summarization
Primary Use Cases:
question answering, summarization, reasoning
Limitations:
Influenced by training-data biases; challenges with open-ended or complex tasks
Considerations:
Model performance is affected by prompt clarity and context length.
Supported Languages 
English (proficient)
Training Details 
Data Sources:
Gemma model family data sources
Methodology:
Recurrent architecture with pre-training and instruction-tuning.
Hardware Used:
TPUv5e
Model Architecture:
Recurrent (Griffin-style) architecture; a simplified sketch follows the Input Output details below.
Safety Evaluation 
Methodologies:
internal red-teaming, structured evaluations
Risk Categories:
text-to-text content safety, representational harms, memorization, large-scale harm
Ethical Considerations:
The model adheres to Google's internal safety policies.
Responsible AI Considerations 
Fairness:
Evaluated against benchmarks like WinoBias and BBQ Dataset for representational harms.
Transparency:
Details provided in the model card and evaluation processes.
Accountability:
Accountability not explicitly mentioned.
Mitigation Strategies:
Provides content safety mechanisms and guidelines.
Input Output 
Input Format:
Text string (question, prompt, document)
Accepted Modalities:
text
Output Format:
Generated English-language text
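
The recurrent architecture follows Google's Griffin design (Arxiv:2402.19427, listed in the tags above), which interleaves gated linear recurrences with local attention instead of using global attention throughout. The snippet below is a deliberately simplified sketch of a per-channel gated linear recurrence, assuming PyTorch; it is illustrative only and is not the exact RG-LRU block used in the released weights.

```python
import torch

def gated_linear_recurrence(x: torch.Tensor, decay_logit: torch.Tensor) -> torch.Tensor:
    """Simplified per-channel gated linear recurrence (illustrative sketch only).

    x:           (batch, seq_len, dim) input activations
    decay_logit: (dim,) parameter controlling how quickly the state decays
    Returns hidden states of shape (batch, seq_len, dim).
    """
    batch, seq_len, dim = x.shape
    a = torch.sigmoid(decay_logit)      # per-channel decay gate in (0, 1)
    h = x.new_zeros(batch, dim)         # fixed-size recurrent state
    outputs = []
    for t in range(seq_len):
        # Blend the previous state with the current input. The actual RG-LRU
        # additionally derives input and recurrence gates from x_t itself.
        h = a * h + (1.0 - a) * x[:, t, :]
        outputs.append(h)
    return torch.stack(outputs, dim=1)

# Toy usage: one sequence of 16 tokens with 8 channels.
hidden = gated_linear_recurrence(torch.randn(1, 16, 8), torch.zeros(8))
```

Because the recurrent state has a fixed size, memory during generation stays constant with sequence length, which is the practical motivation for the recurrent design.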
LLM Name: Recurrentgemma 9B
Repository: 🤗 https://huggingface.co/google/recurrentgemma-9b
Model Size: 9b
Required VRAM: 19.3 GB
Updated: 2025-02-05
Maintainer: google
Model Type: recurrent_gemma
Model Files: 5.0 GB (1-of-4), 5.0 GB (2-of-4), 4.9 GB (3-of-4), 4.4 GB (4-of-4)
Model Architecture: RecurrentGemmaForCausalLM
License: gemma
Transformers Version: 4.42.0.dev0
Tokenizer Class: GemmaTokenizer
Padding Token: <pad>
Vocabulary Size: 256000
Torch Data Type: bfloat16
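
The metadata above (repository, GemmaTokenizer, bfloat16 weights, Transformers 4.42+) maps to a standard Transformers loading path. A minimal sketch, assuming a GPU with roughly the 19.3 GB of VRAM listed above; the prompt and generation settings are illustrative only.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "google/recurrentgemma-9b"

# The Auto classes resolve to GemmaTokenizer and RecurrentGemmaForCausalLM
# (requires transformers >= 4.42).
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # matches the card's bfloat16 weights
    device_map="auto",
)

prompt = "Summarize the advantages of recurrent language models in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```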

Best Alternatives to Recurrentgemma 9B

Best Alternatives | Context / RAM | Downloads | Likes
Recurrentgemma 9B It | 0K / 19.3 GB | 9741 | 50
Google Recurrentgemma 9B 4Q | 0K / 6.1 GB | 78 | 0
Recurrentgemma 9B It | 0K / 19.3 GB | 6 | 3
Recurrentgemma 9B | 0K / 19.3 GB | 8 | 1
Recurrentgemma 9B It Bnb 4bit | 0K / 6.4 GB | 10 | 0
Note: a green score (e.g. "73.2") means the alternative is better than google/recurrentgemma-9b.
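
The 4-bit alternatives above fit in roughly 6-7 GB instead of 19.3 GB. Below is a hedged sketch of loading the base repository in 4-bit via bitsandbytes; it assumes a CUDA GPU with the bitsandbytes package installed, and the pre-quantized alternative repositories listed above can of course be loaded directly instead.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

model_id = "google/recurrentgemma-9b"

# NF4 4-bit quantization; reduces VRAM from ~19.3 GB to roughly the
# 6-7 GB range of the 4-bit alternatives listed above.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",
)
```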


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227