| LLM Name | Bloom 7b1 GPTQ 4bit G128 |
|---|---|
| Repository 🤗 | https://huggingface.co/iproskurina/bloom-7b1-GPTQ-4bit-g128 |
| Model Name | bloom-7b1 |
| Model Creator | bigscience |
| Base Model(s) | |
| Model Size | 2.9b |
| Required VRAM | 7.3 GB |
| Updated | 2024-10-04 |
| Maintainer | iproskurina |
| Model Type | bloom |
| Model Files | |
| Supported Languages | ak ar as bm bn ca code en es eu fr gu hi id ig ki kn lg ln ml mr ne ny or pa pt rn rw sn st sw ta te tn ts tw ur vi wo xh yo zh zu |
| GPTQ Quantization | Yes |
| Quantization Type | gptq \| 4bit |
| Model Architecture | BloomForCausalLM |
| License | bigscience-bloom-rail-1.0 |
| Transformers Version | 4.44.2 |
| Tokenizer Class | BloomTokenizer |
| Padding Token | <pad> |
| Vocabulary Size | 250880 |
| Torch Data Type | float16 |
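A minimal usage sketch, assuming the checkpoint loads through 🤗 Transformers' built-in GPTQ support (which requires `optimum` and an AutoGPTQ-compatible backend to be installed alongside transformers ≥ 4.44, plus a CUDA GPU with roughly the 7.3 GB of VRAM listed above); the prompt and generation settings are illustrative only:

```python
# Illustrative sketch for loading iproskurina/bloom-7b1-GPTQ-4bit-g128.
# Assumes: transformers >= 4.44, optimum + auto-gptq installed, CUDA GPU available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "iproskurina/bloom-7b1-GPTQ-4bit-g128"

tokenizer = AutoTokenizer.from_pretrained(repo_id)  # resolves to the Bloom tokenizer
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    device_map="auto",          # place the 4-bit GPTQ weights on the available GPU
    torch_dtype=torch.float16,  # matches the card's torch dtype
)

# Example prompt; BLOOM is multilingual, so non-English prompts work as well.
prompt = "Translate to French: Hello, how are you?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```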
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Bloom 7b1 Gptq 4bit | 0K / 7.3 GB | 27 | 2 |