LLM Name | Zephyr 7B Beta Assistant V1 Gptq
---|---
Repository 🤗 | https://huggingface.co/TeTLAB/zephyr-7b-beta_assistant_v1_gptq
Merged Model | Yes
Model Size | 7b
Required VRAM | 4.2 GB
Updated | 2025-04-24
Maintainer | TeTLAB
Model Type | mistral
Model Files |
GPTQ Quantization | Yes
Quantization Type | gptq
Model Architecture | MistralForCausalLM
Context Length | 32768
Model Max Length | 32768
Transformers Version | 4.41.2
Vocabulary Size | 32000
Torch Data Type | float16
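The specs above map onto a standard `transformers` load. Below is a minimal sketch, assuming the repository ships a regular GPTQ checkpoint that `transformers` can detect from its `quantization_config` (this requires `optimum` and `auto-gptq` installed alongside `transformers`); the prompt and generation settings are illustrative only.

```python
# Minimal loading sketch; assumes a standard GPTQ checkpoint and an
# installed optimum + auto-gptq stack. Not an official usage snippet.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TeTLAB/zephyr-7b-beta_assistant_v1_gptq"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",          # places the ~4.2 GB quantized weights on GPU
    torch_dtype=torch.float16,  # matches the card's torch_dtype
)

# Zephyr-style checkpoints usually ship a chat template; assumption here.
messages = [{"role": "user", "content": "Summarize GPTQ in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

With the 32768-token context length listed above, longer prompts fit without changes, but VRAM use grows with sequence length beyond the 4.2 GB needed for the weights alone.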
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
...enHermes 2.5 Mistral 7B Marlin | 32K / 4.1 GB | 545 | 2 |
Zephyr 7B Beta Marlin | 32K / 4.1 GB | 104 | 0 |
...ral 7B Instruct V0.3 GPTQ 4bit | 32K / 4.2 GB | 5877 | 18 |
...ral 7B Instruct V0.3 GPTQ 4bit | 32K / 4.2 GB | 2946 | 18 |
...ephyr 7B Beta Channelwise Gptq | 32K / 4 GB | 10850 | 0 |
Mistral 7B Instruct V0.2 GPTQ | 32K / 4.2 GB | 21283 | 51 |
Mistral 7B Instruct V0.3 GPTQ | 32K / 4.2 GB | 1147 | 1 |
...istral 7B Pruned50 GPTQ Marlin | 32K / 4 GB | 5 | 0 |
Mistral 7B Unsloth Gptq 8bit | 32K / 7.7 GB | 6 | 0 |
...lai Mistral 7B V0.1 4 Bit Gptq | 32K / 4.2 GB | 10 | 0 |