LLM Name | Gemma 2B Gptq 8bit
Repository 🤗 | https://huggingface.co/vessl/gemma-2b-gptq-8bit
Model Size | 2b |
Required VRAM | 3.1 GB |
Updated | 2024-12-22 |
Maintainer | vessl |
Model Type | gemma |
GPTQ Quantization | Yes |
Quantization Type | gptq, 8bit
Model Architecture | GemmaForCausalLM |
Context Length | 8192 |
Model Max Length | 8192 |
Transformers Version | 4.41.1 |
Tokenizer Class | GemmaTokenizer |
Padding Token | <pad> |
Vocabulary Size | 256000 |
Torch Data Type | float16 |
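
A minimal loading sketch with 🤗 Transformers, assuming the repo ships a standard GPTQ `quantization_config` in its `config.json` (so the 8-bit weights are picked up automatically) and that `optimum`, `auto-gptq`, and `accelerate` are installed; adjust `device_map` for your hardware.

```python
# Minimal loading sketch (assumes `pip install transformers optimum auto-gptq accelerate`
# and a CUDA GPU with roughly 3.1 GB of free VRAM, per the table above).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vessl/gemma-2b-gptq-8bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# GPTQ checkpoints carry a quantization_config in config.json, so
# from_pretrained loads the pre-quantized 8-bit weights without extra flags.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```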
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
Gemma 1.1 2B It GPTQ | 8K / 3.1 GB | 26475 | 1 |
Gemma 2B Gptq 4bit | 8K / 2.1 GB | 19 | 0 |
Gemma 2B GPTQ | 8K / 2.1 GB | 1389 | 1 |
CodeGemma 2B GPTQ | 8K / 3.1 GB | 14 | 1 |
Vi Gemma 2B RAG | 8K / 5.1 GB | 891 | 13 |
... 2B It Hermes Function Calling | 8K / 5.1 GB | 21 | 0 |
Gemma 2B Bnb 4bit | 8K / 2.1 GB | 3340 | 15 |
Gemma 1.1 2B It Bnb 4bit | 8K / 2.1 GB | 1265 | 4 |
Gemma 2B It Bnb 4bit | 8K / 2.1 GB | 1869 | 18 |
My AwesomeFinance Model | 8K / 2.1 GB | 13 | 0 |
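
Several of the alternatives above use bitsandbytes (Bnb) rather than GPTQ. For comparison, a hedged sketch of on-the-fly 4-bit bitsandbytes loading: unlike GPTQ's pre-quantized weights, this quantizes a full-precision checkpoint at load time. The table lists display names rather than full Hub paths, so the base-model id below is illustrative; `google/gemma-2b` is gated (license acceptance required) and `bitsandbytes` must be installed.

```python
# Sketch of bitsandbytes 4-bit loading as an alternative to GPTQ
# (base-model id illustrative; requires `pip install bitsandbytes accelerate`).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize weights to 4-bit at load time
    bnb_4bit_compute_dtype=torch.float16,   # matmuls run in fp16
)
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2b",
    quantization_config=bnb_config,
    device_map="auto",
)
```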