| LLM Name | Dolphin 2.9.2 Mixtral 8x22b AWQ |
|---|---|
| Repository 🤗 | https://huggingface.co/leptonai/dolphin-2.9.2-mixtral-8x22b-awq |
| Model Size | 19.2b |
| Required VRAM | 73.7 GB |
| Updated | 2025-02-05 |
| Maintainer | leptonai |
| Model Type | mixtral |
| AWQ Quantization | Yes |
| Quantization Type | awq |
| Model Architecture | MixtralForCausalLM |
| Context Length | 65536 |
| Model Max Length | 65536 |
| Transformers Version | 4.37.2 |
| Tokenizer Class | LlamaTokenizer |
| Padding Token | </s> |
| Vocabulary Size | 32002 |
| Torch Data Type | float16 |
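As a rough sanity check on the Required VRAM figure, the weight footprint of a 4-bit AWQ checkpoint can be estimated from the parameter count. The ~141B total-parameter figure for Mixtral 8x22B is an assumption not stated in this card; the sketch below only covers the packed weights, so the quantization scales/zero-points and runtime KV cache add a few GB on top:

```python
# Back-of-the-envelope VRAM estimate for a 4-bit AWQ checkpoint.
# ASSUMPTION: Mixtral 8x22B has roughly 141e9 total parameters
# (not stated in the card above).
total_params = 141e9
bits_per_weight = 4  # AWQ 4-bit quantization

# Packed weight bytes: each parameter takes bits_per_weight / 8 bytes.
weight_bytes = total_params * bits_per_weight / 8
weight_gb = weight_bytes / 1e9  # decimal GB, matching the card's units

print(f"~{weight_gb:.1f} GB for packed 4-bit weights")  # ~70.5 GB
```

The ~70.5 GB for raw weights plus per-group scales/zero-points and framework overhead lines up with the 73.7 GB listed in the card.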
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| WizardLM 2 8x22B AWQ | 64K / 73.7 GB | 36920 | 12 |
| Mixtral 8x22B V0.1 AWQ | 64K / 73.7 GB | 4069 | 36 |
| ...ixtral 8x22B Instruct V0.1 AWQ | 64K / 73.7 GB | 272 | 10 |
| Zephyr Orpo 141B A35b V0.1 AWQ | 64K / 73.7 GB | 22 | 2 |
| Karasu Mixtral 8x22B V0.1 AWQ | 64K / 73.7 GB | 4 | 7 |
| ... 8x22B Instruct V0.1 GPTQ 4bit | 64K / 74.1 GB | 223 | 1 |
| MixTAO 19B Pass | 32K / 38.1 GB | 8 | 1 |
| Multimerge 19B Pass | 32K / 38 GB | 10 | 0 |
| Lorge 2x7B UAMM | 32K / 38.2 GB | 16 | 0 |
| Mistralmath 15B Pass | 32K / 38.5 GB | 11 | 0 |