Codegemma 1.1 2B by Google


Tags: Autotrain compatible, Endpoints compatible, Gemma, Region: us, Safetensors, Sharded, Tensorflow
Model Card on HF 🤗: https://huggingface.co/google/codegemma-1.1-2b

Codegemma 1.1 2B Benchmarks

nn.n% indicates how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Codegemma 1.1 2B (google/codegemma-1.1-2b)

Codegemma 1.1 2B Parameters and Internals

Model Type 
text-to-text, text-to-code
Use Cases 
Areas:
research, commercial applications
Applications:
code completion, code generation, code conversation, code education
Primary Use Cases:
interactive code learning, syntax correction, coding practice
Limitations:
Large Language Models (LLMs) have limitations based on their training data.
Additional Notes 
TPU hardware was used for training. Training focused on fitness for real-world applications, using structured examples and heuristic techniques.
Supported Languages 
English (fluent)
Training Details 
Data Sources:
publicly available code repositories, open source mathematics datasets, synthetically generated code
Data Volume:
500 to 1000 billion tokens
Methodology:
fill-in-the-middle (FIM) training with prefix-suffix-middle (PSM) and suffix-prefix-middle (SPM) orderings
Hardware Used:
TPUv5e
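The PSM and SPM orderings above are the two ways a FIM training example arranges the code around the gap to be filled. As a minimal sketch, assuming the FIM sentinel tokens listed in the CodeGemma model card (`<|fim_prefix|>`, `<|fim_suffix|>`, `<|fim_middle|>`) and one common SPM arrangement, the two prompt layouts can be built like this; verify the exact token strings against the tokenizer before use:

```python
# Sketch of PSM (prefix-suffix-middle) and SPM (suffix-prefix-middle)
# fill-in-the-middle prompt construction. The sentinel token strings are
# assumed from the CodeGemma model card; check tokenizer.special_tokens_map.

FIM_PREFIX = "<|fim_prefix|>"
FIM_SUFFIX = "<|fim_suffix|>"
FIM_MIDDLE = "<|fim_middle|>"

def build_fim_prompt(prefix: str, suffix: str, mode: str = "psm") -> str:
    """Build a fill-in-the-middle prompt; the model generates the middle."""
    if mode == "psm":
        # PSM: prefix first, then suffix, then ask for the middle.
        return f"{FIM_PREFIX}{prefix}{FIM_SUFFIX}{suffix}{FIM_MIDDLE}"
    if mode == "spm":
        # SPM: suffix before prefix (one common arrangement; variants exist).
        return f"{FIM_SUFFIX}{suffix}{FIM_PREFIX}{prefix}{FIM_MIDDLE}"
    raise ValueError(f"unknown FIM mode: {mode}")

# Example: ask the model to complete a function body given its signature
# and the code that follows the gap.
prompt = build_fim_prompt(
    prefix="def add(a, b):\n    ",
    suffix="\n    return result",
)
```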
Safety Evaluation 
Methodologies:
red-teaming, structured evaluations
Findings:
within acceptable thresholds for meeting internal policies
Risk Categories:
content safety, representational harms, child safety
Responsible AI Considerations 
Fairness:
Evaluated with structured evaluations and internal red-teaming
Accountability:
Google
Input Output 
Input Format:
code prefix and/or suffix, or natural language text or prompt
Output Format:
fill-in-the-middle code completion, code and natural language
Performance Tips:
Provide a list of terminators to the `generate` function to ensure generation stops at the first delimiter.
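As a sketch of this tip, assuming the FIM sentinel strings from the model card (e.g. `<|file_separator|>`; the exact tokens should be checked against the tokenizer), the raw completion can also be cut client-side at the first terminator the model emits:

```python
# Post-process a raw completion by truncating at the earliest occurrence of
# any terminator string. The terminator list is an assumption based on the
# CodeGemma model card's FIM sentinels; verify against the tokenizer.

FIM_TERMINATORS = [
    "<|fim_prefix|>",
    "<|fim_suffix|>",
    "<|fim_middle|>",
    "<|file_separator|>",
]

def truncate_at_first_terminator(completion: str, terminators=None) -> str:
    """Return the completion cut at the earliest terminator, if any."""
    cut = len(completion)
    for t in terminators or FIM_TERMINATORS:
        idx = completion.find(t)
        if idx != -1:
            cut = min(cut, idx)
    return completion[:cut]
```

Stopping generation at the model side (for example via an `eos_token_id` list passed to `generate`) avoids wasting tokens; truncating after the fact as above is a version-agnostic fallback.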
Release Notes 
Version:
1.1
Date:
Current
Notes:
Performance metrics and comparisons provided.
LLM Name: Codegemma 1.1 2B
Repository 🤗: https://huggingface.co/google/codegemma-1.1-2b
Model Size: 2B
Required VRAM: 5.1 GB
Updated: 2025-02-05
Maintainer: google
Model Type: gemma
Model Files: 5.0 GB (1-of-2), 0.1 GB (2-of-2)
Model Architecture: GemmaForCausalLM
License: gemma
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.40.1
Tokenizer Class: GemmaTokenizer
Padding Token: <pad>
Vocabulary Size: 256000
Torch Data Type: bfloat16

Quantized Models of the Codegemma 1.1 2B

Model                    Likes  Downloads  VRAM
Codegemma 1.1 2B 4bit    1      10         1 GB
Codegemma 1.1 2B 8bit    1      9          2 GB
Codegemma 1.1 2B AWQ     0      5          3 GB

Best Alternatives to Codegemma 1.1 2B

Best Alternatives         Context / RAM   Downloads / Likes
Gemma 1.1 2B It           8K / 5.1 GB     93340154
Codegemma 2B              8K / 5.1 GB     853778
Gemma Ko 1.1 2B It        8K / 5.1 GB     23911
EMO 2B                    8K / 5.1 GB     43042
Octopus V2                8K / 5.1 GB     935872
Gemma2b Lungcancerqa      8K / 3.1 GB     842
LION Gemma 2B Sft V1.0    8K / 5.1 GB     1040
Gemma 2B Orpo             8K / 5.1 GB     14828
2B Or Not 2B              8K / 5.1 GB     5426
Gemma 2B Ko V0            8K / 5 GB       2230
Note: a green score (e.g. "73.2") indicates that the model is better than google/codegemma-1.1-2b.

Rank the Codegemma 1.1 2B Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227