| Field | Value |
|---|---|
| LLM Name | Blue Orchid 2x7b GPTQ |
| Repository 🤗 | https://huggingface.co/LoneStriker/Blue-Orchid-2x7b-GPTQ |
| Required VRAM | 7.1 GB |
| Updated | 2025-02-22 |
| Maintainer | LoneStriker |
| Model Type | mixtral |
| Model Files | |
| GPTQ Quantization | Yes |
| Quantization Type | gptq \| 4bit |
| Model Architecture | MixtralForCausalLM |
| License | apache-2.0 |
| Context Length | 8192 |
| Model Max Length | 8192 |
| Transformers Version | 4.37.1 |
| Tokenizer Class | LlamaTokenizer |
| Padding Token | `<s>` |
| Vocabulary Size | 32000 |
| Torch Data Type | float16 |
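Given the card above, the model can be loaded like any other GPTQ checkpoint on the Hub. The following is a minimal sketch, assuming a recent `transformers` (>= 4.37, matching the version listed) with the GPTQ backend installed (`optimum` and `auto-gptq`) and a CUDA GPU with at least ~7.1 GB of free VRAM; the prompt text is illustrative only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository listed on this card.
model_id = "LoneStriker/Blue-Orchid-2x7b-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",          # place the quantized weights on the GPU
    torch_dtype=torch.float16,  # matches the card's Torch Data Type
)

# Illustrative prompt; any text up to the 8192-token context fits.
prompt = "Write a short scene set in a moonlit orchid garden."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```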
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| ...y Mixtral 22bx8 V21.1 65K Gptq | 64K / GB | 5 | 0 |
| LHK DPO V1 GPTQ 4bit | 32K / 7.8 GB | 5 | 1 |
| ...dy Mixtral 7bx8 V18.1 32K Gptq | 32K / 24.8 GB | 33 | 3 |
| Mixtral 8x7B V0.1 Int8 GPTQ | 32K / GB | 12 | 2 |
| ...oE V0.1 DPO F16 5.0bpw H6 EXL2 | 195K / 38.8 GB | 10 | 0 |
| ...oE V0.1 DPO F16 4.0bpw H6 EXL2 | 195K / 31.3 GB | 8 | 0 |
| ...2 Mixtral 8x22b 6.0bpw H8 EXL2 | 64K / 105.8 GB | 7 | 1 |
| WizardLM 2 8x22 EXL2 4.0bpw | 64K / 70.9 GB | 8 | 1 |
| ...rdLM 2 8x22B Beige EXL2 5.0bpw | 64K / 88.4 GB | 17 | 0 |
| ...M 2 8x22B Beige 4.0bpw H6 EXL2 | 64K / 70.8 GB | 13 | 0 |
Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference!