| LLM Name | Codegemma 1.1 2B AWQ |
|---|---|
| Repository 🤗 | https://huggingface.co/TechxGenus/codegemma-1.1-2b-AWQ |
| Base Model(s) | |
| Model Size | 2b |
| Required VRAM | 3.1 GB |
| Updated | 2024-12-22 |
| Maintainer | TechxGenus |
| Model Type | gemma |
| Model Files | |
| AWQ Quantization | Yes |
| Quantization Type | awq |
| Model Architecture | GemmaForCausalLM |
| License | gemma |
| Context Length | 8192 |
| Model Max Length | 8192 |
| Transformers Version | 4.40.0 |
| Tokenizer Class | GemmaTokenizer |
| Padding Token | `<pad>` |
| Vocabulary Size | 256000 |
| Torch Data Type | float16 |
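As a rough sketch of how these values fit together, an AWQ-quantized checkpoint like this one can typically be loaded directly through the `transformers` library (version 4.40.0 as listed above, with the `autoawq` package installed). The repository name below comes from the table; the prompt and generation settings are illustrative assumptions, not part of the model card.

```python
# Minimal sketch: loading the AWQ-quantized checkpoint with transformers.
# Assumes `pip install transformers autoawq` and a CUDA GPU with roughly
# 3.1 GB of free VRAM, per the table above. Prompt and generation settings
# are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "TechxGenus/codegemma-1.1-2b-AWQ"

tokenizer = AutoTokenizer.from_pretrained(repo_id)  # GemmaTokenizer, vocab size 256000
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # matches the Torch Data Type listed above
    device_map="auto",
)

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)  # well within the 8192-token context
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```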
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| ... Codegemma 2B AWQ 4bit Smashed | 8K / 3.1 GB | 1225 | 0 |
| Gemma 1.1 2B It AWQ | 8K / 3.1 GB | 22 | 1 |
| Gemma 2B It AWQ | 8K / 3.1 GB | 35 | 0 |
| Gemma 2B AWQ | 8K / 3.1 GB | 24 | 0 |
| Vi Gemma 2B RAG | 8K / 5.1 GB | 891 | 13 |
| ... 2B It Hermes Function Calling | 8K / 5.1 GB | 21 | 0 |
| Octopus V2 Gguf AWQ | 8K / 1.2 GB | 1333 | 7 |
| Gemma 2B Bnb 4bit | 8K / 2.1 GB | 3340 | 15 |
| Gemma 1.1 2B It Bnb 4bit | 8K / 2.1 GB | 1265 | 4 |
| Gemma 2B It Bnb 4bit | 8K / 2.1 GB | 1869 | 18 |