Gemma Ko 2B by beomi

Tags: Autotrain compatible, Endpoints compatible, Gemma, Ko, En, PyTorch, Safetensors, Sharded, TensorFlow, Region: us
Model Card on HF 🤗: https://huggingface.co/beomi/gemma-ko-2b

Gemma Ko 2B Parameters and Internals

Model Type: text generation
Additional Notes: The model can be used for applications such as content creation, research assistance, and other NLP tasks. Known limitations include biases or gaps in the training data, difficulty with complex contexts and tasks, language ambiguity, factual inaccuracies, and limited common-sense reasoning.
Supported Languages: Korean (fluent), English (fluent)

Input / Output
Input Format: Text string, such as a question, a prompt, or a document to be summarized.
Output Format: Generated Korean- or English-language text in response to the input.
Performance Tips: Longer context generally leads to better outputs, up to a certain point.
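
As an illustration of this input/output flow, here is a minimal sketch that feeds a text prompt to the model through the Hugging Face transformers text-generation pipeline. The prompt text and generation parameters are illustrative assumptions, not values taken from the model card.

```python
# Minimal sketch: text prompt in, generated Korean/English text out.
# The prompt and generation settings below are illustrative assumptions.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="beomi/gemma-ko-2b",    # repository listed on this page
    torch_dtype=torch.bfloat16,   # matches the checkpoint's data type
    device_map="auto",            # use a GPU if available (requires `accelerate`)
)

prompt = "대한민국의 수도는 어디인가요?"  # example Korean question (assumption)
outputs = generator(prompt, max_new_tokens=64, do_sample=False)
print(outputs[0]["generated_text"])  # prompt followed by the model's continuation
```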

Release Notes
Version: 2B
Date: 2024-03-26
Notes: First release of the Gemma-Ko 2B model.

LLM Name: Gemma Ko 2B
Repository 🤗: https://huggingface.co/beomi/gemma-ko-2b
Model Size: 2B
Required VRAM: 5.1 GB
Updated: 2024-12-22
Maintainer: beomi
Model Type: gemma
Model Files: 5.0 GB (shard 1 of 2), 0.1 GB (shard 2 of 2)
Supported Languages: ko, en
Model Architecture: GemmaForCausalLM
License: other
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.38.2
Tokenizer Class: GemmaTokenizer
Padding Token: <pad>
Vocabulary Size: 256000
Torch Data Type: bfloat16
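
These fields map directly onto a standard transformers loading call. The sketch below is a minimal, hedged example of loading the checkpoint and checking a few of the values listed above; it assumes transformers >= 4.38 (per the Transformers Version field) and enough memory for the roughly 5.1 GB of bfloat16 weights.

```python
# Minimal sketch of loading the checkpoint described by the fields above
# (sharded safetensors, bfloat16, GemmaForCausalLM, 8192-token context).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "beomi/gemma-ko-2b"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # GemmaTokenizer
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # Torch Data Type: bfloat16 (~5.1 GB of weights)
)

# The loaded config should reflect the card's values.
print(model.config.model_type)               # "gemma"
print(model.config.max_position_embeddings)  # 8192
print(model.config.vocab_size)               # 256000
print(tokenizer.pad_token)                   # "<pad>"
```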

Best Alternatives to Gemma Ko 2B

Best Alternatives | Context / RAM | Downloads / Likes
Gemma 1.1 2B It | 8K / 5.1 GB | 86515152
Gemma Ko 1.1 2B It | 8K / 5.1 GB | 48061
Octopus V2 | 8K / 5.1 GB | 471868
Codegemma 2B | 8K / 5.1 GB | 213773
EMO 2B | 8K / 5.1 GB | 43271
Gemma 2B Ko V0 | 8K / 5 GB | 25310
Gemma2b Lungcancerqa | 8K / 3.1 GB | 762
Gemma 2B Ko Dev Pbmt192 | 8K / 5 GB | 25301
Gemma 2B Data Std | 8K / 5.1 GB | 25371
Geko2 | 8K / 5.1 GB | 160

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217