LLM Name | Gemma 2 9B It AWQ 4bit
Repository 🤗 | https://huggingface.co/TitanML/gemma-2-9b-it-AWQ-4bit
Model Size | 9B
Required VRAM | 8 GB |
Updated | 2024-07-04 |
Maintainer | TitanML |
Model Type | gemma2 |
Model Files | |
AWQ Quantization | Yes |
Quantization Type | awq|4bit |
Model Architecture | Gemma2ForCausalLM |
Context Length | 8192 |
Model Max Length | 8192 |
Transformers Version | 4.42.0 |
Tokenizer Class | GemmaTokenizer |
Padding Token | <pad> |
Vocabulary Size | 256000 |
Torch Data Type | float16 |
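Given the repository, AWQ quantization type, and Transformers version listed above, a minimal loading sketch looks like the following. This is an assumption based on the standard Transformers AWQ workflow (not an official snippet from the maintainer): it presumes `transformers >= 4.42.0` and the `autoawq` package are installed, and that a CUDA GPU with roughly 8 GB of free VRAM is available.

```python
# Minimal sketch: loading the AWQ 4-bit checkpoint with Transformers.
# Assumes transformers >= 4.42.0 and autoawq are installed; the AWQ
# quantization config stored in the repo is picked up automatically.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TitanML/gemma-2-9b-it-AWQ-4bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the card's Torch Data Type
    device_map="auto",          # place layers on the available GPU(s)
)

# Gemma 2 instruct checkpoints usually ship a chat template with the tokenizer.
messages = [{"role": "user", "content": "Summarize what AWQ 4-bit quantization does."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Generation stays within the 8192-token context length listed above; prompts longer than that would need to be truncated before calling `generate`.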
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
Gemma 2 9B It AWQ INT4 | 8K / GB | 48 | 2 |
Gemma 2 9B It Bnb 4bit | 8K / 6.1 GB | 24359 | 17 |
Gemma 2 9B Bnb 4bit | 8K / 6.1 GB | 26422 | 22 |
NEPALI LLM | 8K / 20.5 GB | 41 | 0 |
Nepali LLM | 8K / 20.5 GB | 11 | 0 |
Athena Gemma 2 9B It Philos | 8K / 18.6 GB | 7 | 0 |
Athena Gemma 2 9B It | 8K / 18.6 GB | 1 | 2 |
Gemma 2 9B Bangla 16bit | 8K / 18.6 GB | 6 | 0 |
EpistemeAI Codegemma 2 9B | 8K / 18.6 GB | 6 | 2 |
Vi Gemma 2 9B Function Calling | 8K / 18.6 GB | 9 | 3 |