| Best Alternatives | HF Rank | Context/RAM | Downloads | Likes |
|---|---|---|---|---|
| ...ruct Malayalam Model Vllm 4bit | — | 16K / 5.6 GB | 28 | 0 |
| Gemma Ko 7B Instruct V0.62 | — | 8K / 17 GB | 2666 | 2 |
| Gemma Ko 7B Instruct V0.71 | — | 8K / 17 GB | 1233 | 1 |
| Gemma Ko 7B Instruct V0.50 | — | 8K / 17 GB | 9 | 0 |
| Gemma Ko 7B Instruct V0.52 | — | 8K / 17 GB | 8 | 0 |
| RoGemma 7B Instruct | — | 8K / 17.1 GB | 59 | 1 |
| ...tuned Open Korean Instructions | — | 8K / 17.1 GB | 1 | 1 |
| Zephyr Gemma 7B Telugu | — | 8K / 17.1 GB | 1 | 1 |
| X Instruction 7B 10langs | — | 8K / 17.1 GB | 7 | 0 |
| X Instruction 7B Ta | — | 8K / 17.1 GB | 6 | 0 |
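The memory spread in the table follows directly from weight precision: the fp16 alternatives need roughly 17 GB, while the 4-bit GPTQ variants fit in about 5.6 GB. A back-of-the-envelope sketch of that arithmetic, using the nominal 7B parameter count from the card below (actual footprints also include embeddings, KV cache, and framework overhead, so these are lower bounds):

```python
# Rough weight-memory estimate: parameters x bits per parameter / 8 bytes.
# Real usage adds embedding tables, KV cache, and framework overhead,
# so the listed figures above sit somewhat higher than these estimates.
def weight_gb(n_params_billion: float, bits_per_param: float) -> float:
    return n_params_billion * 1e9 * bits_per_param / 8 / 1e9

print(f"fp16 7B:    ~{weight_gb(7, 16):.1f} GB")  # ~14.0 GB -> ~17 GB in the table
print(f"GPTQ 4-bit: ~{weight_gb(7, 4):.1f} GB")   # ~3.5 GB  -> ~5.6 GB listed below
```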
| LLM Name | Gemma 7B Instruct GPTQ 4bit |
|---|---|
| Repository | Open on 🤗 |
| Model Size | 7b |
| Required VRAM | 5.6 GB |
| Updated | 2024-07-05 |
| Maintainer | stan-hua |
| Model Type | gemma |
| Instruction-Based | Yes |
| Model Files | |
| GPTQ Quantization | Yes |
| Quantization Type | gptq\|4bit |
| Model Architecture | GemmaForCausalLM |
| Context Length | 8192 |
| Model Max Length | 8192 |
| Transformers Version | 4.41.1 |
| Tokenizer Class | GemmaTokenizer |
| Padding Token | `<pad>` |
| Vocabulary Size | 256000 |
| Initializer Range | 0.02 |
| Torch Data Type | float16 |
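Given the GPTQ 4-bit quantization, GemmaForCausalLM architecture, and Transformers 4.41.1 listed above, a typical way to try the model is to load it directly with 🤗 Transformers, which reads the GPTQ config from the repository (the `optimum` and `auto-gptq` packages must be installed). A minimal sketch, assuming the hypothetical repo id `stan-hua/gemma-7b-instruct-gptq-4bit` inferred from the maintainer and model name; use the actual repository link above:

```python
# Minimal loading sketch for a GPTQ 4-bit Gemma checkpoint.
# Assumes: transformers >= 4.41.1 plus `optimum` and `auto-gptq` installed,
# and a CUDA GPU with ~5.6 GB of free VRAM (per the card above).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "stan-hua/gemma-7b-instruct-gptq-4bit"  # hypothetical id; check the repo link

tokenizer = AutoTokenizer.from_pretrained(repo_id)  # GemmaTokenizer, pad token <pad>
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    device_map="auto",  # GPTQ kernels run on GPU; place the model automatically
)

# The model is instruction-tuned, so apply Gemma's chat template rather than
# feeding raw text.
messages = [{"role": "user", "content": "Summarize what GPTQ 4-bit quantization does."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Keep the prompt plus generated tokens within the 8192-token context length listed above; longer inputs exceed the model's maximum length.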