| Field | Value |
|---|---|
| LLM Name | Zephyr 7B Beta Assistant V1 Gptq |
| Repository 🤗 | https://huggingface.co/TeTLAB/zephyr-7b-beta_assistant_v1_gptq |
| Merged Model | Yes |
| Model Size | 7b |
| Required VRAM | 4.2 GB |
| Updated | 2025-02-05 |
| Maintainer | TeTLAB |
| Model Type | mistral |
| Model Files |  |
| GPTQ Quantization | Yes |
| Quantization Type | gptq |
| Model Architecture | MistralForCausalLM |
| Context Length | 32768 |
| Model Max Length | 32768 |
| Transformers Version | 4.41.2 |
| Vocabulary Size | 32000 |
| Torch Data Type | float16 |
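As a minimal sketch of how a GPTQ checkpoint like this is typically loaded, assuming the repository ships a standard `quantization_config` that transformers (4.41.2 per the table) detects automatically via the optimum/auto-gptq integration; the prompt and generation settings below are purely illustrative:

```python
# Minimal sketch: loading this GPTQ checkpoint with transformers.
# Assumes `optimum` and `auto-gptq` are installed alongside transformers,
# and that the repo's quantization_config is picked up automatically.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TeTLAB/zephyr-7b-beta_assistant_v1_gptq"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # GPTQ kernels need a GPU; ~4.2 GB VRAM per the table above
)

# Illustrative prompt; Zephyr-style assistants usually expect a chat template.
messages = [{"role": "user", "content": "Summarize GPTQ quantization in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```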
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Mistral 7B Instruct V0.2 GPTQ | 32K / 4.2 GB | 391638 | 50 |
| Mistral 7B Instruct V0.3 GPTQ | 32K / 4.2 GB | 8638 | 0 |
| ...ral 7B Instruct V0.3 GPTQ 4bit | 32K / 4.2 GB | 1896 | 18 |
| ...ephyr 7B Beta Channelwise Gptq | 32K / 4 GB | 9922 | 0 |
| NeuralBeagle14 7B GPTQ | 32K / 4.2 GB | 16920 | 5 |
| ...baraHermes 2.5 Mistral 7B GPTQ | 32K / 4.2 GB | 3708 | 56 |
| ...istral 7B Pruned50 GPTQ Marlin | 32K / 4 GB | 76 | 0 |
| ...l Neural Chat 7B V3.8 Bit Gptq | 32K / 7.7 GB | 77 | 0 |
| ...lai Mistral 7B V0.1 4 Bit Gptq | 32K / 4.2 GB | 79 | 0 |
| ...lai Mistral 7B V0.1 8 Bit Gptq | 32K / 7.7 GB | 78 | 0 |