LLM Name | Openbuddy Mixtral 7bx8 V18.1 32K Gptq |
Repository 🤗 | https://huggingface.co/OpenBuddy/openbuddy-mixtral-7bx8-v18.1-32k-gptq
Required VRAM | 24.8 GB |
Updated | 2025-02-22 |
Maintainer | OpenBuddy |
Model Type | mixtral |
Supported Languages | zh, en, fr, de, ja, ko, it, ru
GPTQ Quantization | Yes |
Quantization Type | gptq, 4bit
Model Architecture | MixtralForCausalLM |
License | apache-2.0 |
Context Length | 32768 |
Model Max Length | 32768 |
Transformers Version | 4.38.0.dev0 |
Tokenizer Class | LlamaTokenizer |
Beginning of Sentence Token | <s> |
End of Sentence Token | </s> |
Unk Token | <unk> |
Vocabulary Size | 36608 |
Torch Data Type | float16 |
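The configuration above is enough to load the checkpoint with 🤗 Transformers. Below is a minimal sketch, assuming a GPTQ-capable backend (e.g. auto-gptq with optimum) is installed and a GPU with roughly the listed 24.8 GB of VRAM is available; the repo id comes from the table, and the prompt is purely illustrative (OpenBuddy models expect their own chat template for best results).

```python
# Minimal loading sketch for this 4-bit GPTQ Mixtral checkpoint.
# Assumes: pip install transformers optimum auto-gptq (GPTQ weights are
# dequantized on the fly by the auto-gptq backend).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "OpenBuddy/openbuddy-mixtral-7bx8-v18.1-32k-gptq"

tokenizer = AutoTokenizer.from_pretrained(repo_id)  # LlamaTokenizer per the config
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    device_map="auto",          # ~24.8 GB VRAM for the quantized weights
    torch_dtype=torch.float16,  # matches the listed torch dtype
)

prompt = "Hello, how are you?"  # illustrative only; see the model card for the chat format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that both the context length and the model max length are 32768 tokens, so long-context prompts up to 32K are supported without further configuration.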
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
...y Mixtral 22bx8 V21.1 65K Gptq | 64K / GB | 5 | 0 |
LHK DPO V1 GPTQ 4bit | 32K / 7.8 GB | 5 | 1 |
Mixtral 8x7B V0.1 Int8 GPTQ | 32K / GB | 12 | 2 |
Blue Orchid 2x7b GPTQ | 8K / 7.1 GB | 56 | 1 |
...oE V0.1 DPO F16 5.0bpw H6 EXL2 | 195K / 38.8 GB | 10 | 0 |
...oE V0.1 DPO F16 4.0bpw H6 EXL2 | 195K / 31.3 GB | 8 | 0 |
...2 Mixtral 8x22b 6.0bpw H8 EXL2 | 64K / 105.8 GB | 7 | 1 |
WizardLM 2 8x22 EXL2 4.0bpw | 64K / 70.9 GB | 8 | 1 |
...rdLM 2 8x22B Beige EXL2 5.0bpw | 64K / 88.4 GB | 17 | 0 |
...M 2 8x22B Beige 4.0bpw H6 EXL2 | 64K / 70.8 GB | 13 | 0 |