| Best Alternatives | HF Rank | Context/RAM | Downloads | Likes |
|---|---|---|---|---|
| ...ixtral Instruct 8x7b Zloss AWQ | 64.2 | 32K / 24.7 GB | 10 | 1 |
| ...utLM Mixtral 8x7B Instruct AWQ | 63.8 | 32K / 24.7 GB | 468 | 2 |
| Mixtral 8x7B Instruct V0.1 AWQ | 63.7 | 32K / 24.7 GB | 1025 | 54 |
| ...Mixtral 8x7B V0.1 Dolly15K AWQ | 63.5 | 32K / 24.7 GB | 6 | 1 |
| Mixtral Instruct AWQ | — | 32K / 24.7 GB | 14524 | 39 |
| Dolphin 2.7 Mixtral 8x7b AWQ | — | 32K / 24.7 GB | 3985 | 19 |
| Dolphin 2.6 Mixtral 8x7b AWQ | — | 32K / 24.7 GB | 20 | 13 |
| Dolphin 2.5 Mixtral 8x7b AWQ | — | 32K / 24.7 GB | 8 | 6 |
| ...0.1 LimaRP ZLoss DARE TIES AWQ | — | 32K / 24.7 GB | 288 | 3 |
| ...1 Mixtral 8x7b Instruct V3 AWQ | — | 32K / 24.7 GB | 36 | 1 |
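The Downloads and Likes columns mirror live Hugging Face Hub statistics, so they can be re-fetched at any time with the `huggingface_hub` client. A minimal sketch is below; the repo ID is an assumption (the table truncates most full names), so substitute the repository you care about:

```python
# Hedged sketch: fetch current download/like counts for one of the models above.
from huggingface_hub import HfApi

api = HfApi()
# Assumed repo ID, built from the Maintainer and LLM Name listed below.
info = api.model_info("ybelkada/Mixtral-8x7B-Instruct-v0.1-AWQ")
print(f"downloads={info.downloads}, likes={info.likes}")
```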
| LLM Name | Mixtral 8x7B Instruct V0.1 AWQ |
|---|---|
| Repository | Open on 🤗 Hugging Face |
| Model Size | 6.5B |
| Required VRAM | 24.7 GB |
| Updated | 2024-07-01 |
| Maintainer | ybelkada |
| Model Type | mixtral |
| Instruction-Based | Yes |
| Model Files | |
| AWQ Quantization | Yes |
| Quantization Type | awq |
| Model Architecture | MixtralForCausalLM |
| Context Length | 32768 |
| Model Max Length | 32768 |
| Transformers Version | 4.36.0.dev0 |
| Tokenizer Class | LlamaTokenizer |
| Vocabulary Size | 32000 |
| Initializer Range | 0.02 |
| Torch Data Type | float16 |
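Since transformers ≥ 4.35 can load AWQ checkpoints directly (with the `autoawq` package installed), a minimal loading sketch matching the specs above might look like the following. The repo ID is an assumption derived from the Maintainer and LLM Name fields; the 24.7 GB of quantized weights call for a GPU with at least that much VRAM, or sharding across devices:

```python
# Minimal sketch, assuming the repo ID below and an autoawq-enabled environment.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ybelkada/Mixtral-8x7B-Instruct-v0.1-AWQ"  # assumed repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the Torch Data Type listed above
    device_map="auto",          # spread the ~24.7 GB of AWQ weights across available GPUs
)

# Instruction-tuned Mixtral expects [INST] ... [/INST] formatting;
# apply_chat_template applies it from the tokenizer's chat template.
messages = [{"role": "user", "content": "Summarize AWQ quantization in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Prompts plus generated tokens must stay within the 32768-token context length listed above; `max_new_tokens` only bounds the generated portion.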