| Field | Value |
|---|---|
| LLM Name | Mixtral 8x22B V0.1 AWQ |
| Repository 🤗 | https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1-AWQ |
| Model Name | Mixtral-8x22B-v0.1-AWQ |
| Model Creator | v2ray |
| Base Model(s) | |
| Model Size | 19.2b |
| Required VRAM | 73.7 GB |
| Updated | 2025-01-20 |
| Maintainer | mistral-community |
| Model Type | mixtral |
| Model Files | |
| Supported Languages | en es de it fr |
| AWQ Quantization | Yes |
| Quantization Type | awq |
| Model Architecture | MixtralForCausalLM |
| Context Length | 65536 |
| Model Max Length | 65536 |
| Transformers Version | 4.38.2 |
| Tokenizer Class | LlamaTokenizer |
| Vocabulary Size | 32000 |
| Torch Data Type | float16 |
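
Given the repository and architecture listed above, a minimal loading sketch with Hugging Face `transformers` (4.38.2 or later). This is an untested illustration, not the maintainer's official usage snippet: it assumes the `autoawq` and `accelerate` packages are installed for AWQ kernels and multi-GPU placement, and the ~73.7 GB of quantized weights must still fit across your available GPUs.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository listed in the details table above
model_id = "mistral-community/Mixtral-8x22B-v0.1-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # resolves to LlamaTokenizer

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the Torch Data Type listed above
    device_map="auto",          # shard the ~73.7 GB of weights across available GPUs
)

prompt = "The Mixtral 8x22B architecture is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```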
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| ...olphin 2.9.2 Mixtral 8x22b AWQ | 64K / 73.7 GB | 4904 | 0 |
| WizardLM 2 8x22B AWQ | 64K / 73.7 GB | 9279 | 12 |
| ...ixtral 8x22B Instruct V0.1 AWQ | 64K / 73.7 GB | 138 | 10 |
| Karasu Mixtral 8x22B V0.1 AWQ | 64K / 73.7 GB | 13 | 7 |
| Zephyr Orpo 141B A35b V0.1 AWQ | 64K / 73.7 GB | 23 | 2 |
| ... 8x22B Instruct V0.1 GPTQ 4bit | 64K / 74.1 GB | 180 | 1 |
| MixTAO 19B Pass | 32K / 38.1 GB | 26 | 1 |
| Multimerge 19B Pass | 32K / 38 GB | 10 | 0 |
| Lorge 2x7B UAMM | 32K / 38.2 GB | 16 | 0 |
| Mistralmath 15B Pass | 32K / 38.5 GB | 11 | 0 |