LLM Name | Gemma 2 It GGUF |
Repository 🤗 | https://huggingface.co/unsloth/gemma-2-it-GGUF |
Model Size | 2b |
Required VRAM | 1.2 GB |
Updated | 2024-10-30 |
Maintainer | unsloth |
Model Type | gemma2 |
Model Files | |
Supported Languages | en |
GGUF Quantization | Yes |
Quantization Type | gguf |
Model Architecture | Gemma2ForCausalLM |
License | gemma |
Context Length | 8192 |
Model Max Length | 8192 |
Transformers Version | 4.43.3 |
Vocabulary Size | 256000 |
Torch Data Type | bfloat16 |
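Because the checkpoint is distributed as GGUF, it can be run locally through a llama.cpp binding such as llama-cpp-python. The snippet below is a minimal sketch, not the model's official usage example: the quantization filename (`*Q4_K_M.gguf`) is an assumption, so check the repository's file list for the exact artifact name.

```python
# Minimal sketch: loading this GGUF checkpoint with llama-cpp-python.
# Assumptions: the repo contains a Q4_K_M quant file (verify the exact
# filename on https://huggingface.co/unsloth/gemma-2-it-GGUF), and the
# packages `llama-cpp-python` and `huggingface_hub` are installed.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="unsloth/gemma-2-it-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant variant; matched by glob
    n_ctx=8192,               # the model's maximum context length
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what GGUF quantization is."}]
)
print(response["choices"][0]["message"]["content"])
```

With roughly 1.2 GB of required VRAM, the quantized 2B file also runs on CPU through the same API; llama.cpp offloads to GPU only when layers are explicitly assigned.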
Best Alternatives | Context / RAM | Downloads | Likes |
---|---|---|---|
Gemma 2 2B It Bnb 4bit | 8K / 2.2 GB | 26473 | 13 |
Gemma 2 2B Id | 8K / GB | 16 | 0 |
Gemma 2 2B Bnb 4bit | 8K / 2.2 GB | 22551 | 6 |
Gemma 2 2B It 4bit | 8K / 1.5 GB | 1922 | 3 |
Gemma 2 2B Jpn It 4bit | 8K / GB | 65 | 2 |
Gemma 2 2B 4bit | 8K / 2.2 GB | 159 | 1 |
Vi Gemma 2 2B Function Calling | 8K / 5.2 GB | 144 | 5 |
Athena Gemma 2 2B It Philos | 8K / 5.2 GB | 20 | 0 |
Gemma 2 2B It 8bit | 8K / 2.8 GB | 54 | 2 |
Gemma 2 2B 4bit | 8K / 1.5 GB | 42 | 3 |