| Best Alternatives | HF Rank | Context/RAM | Downloads | Likes |
|---|---|---|---|---|
| MixTAO 7Bx2 MoE V8.1 GGUF | 68 | 0K / 4.8 GB | 801 | 10 |
| FusionNet 34Bx2 MoE GGUF | 67.6 | 0K / 22.4 GB | 274 | 5 |
| ...AO 7Bx2 MoE Instruct V7.0 GGUF | 67.1 | 0K / 4.8 GB | 376 | 10 |
| Helion 4x34B GGUF | 66.2 | 0K / 41.5 GB | 33 | 3 |
| Cosmosis 3x34B GGUF | 66.1 | 0K / 31.9 GB | 90 | 6 |
| ...Top 5x7B Instruct S5 V0.1 GGUF | 65.9 | 0K / 2.7 GB | 159 | 1 |
| ...eTop 5x7B Instruct T V0.1 GGUF | 65.7 | 0K / 2.7 GB | 170 | 0 |
| ...Top 5x7B Instruct S4 V0.1 GGUF | 65.7 | 0K / 2.7 GB | 162 | 0 |
| Go Bruins V2.1.1 GGUF | 65.7 | 0K / 3.1 GB | 356 | 7 |
| Quantum DPO V0.1 GGUF | 65.7 | 0K / 3.1 GB | 246 | 1 |
| LLM Name | Mixtral 8x7B V0.1 GGUF |
|---|---|
| Repository | Open on Hugging Face |
| Model Name | Mixtral 8X7B v0.1 |
| Model Creator | Mistral AI |
| Base Model(s) | |
| Required VRAM | 15.6 GB |
| Updated | 2024-06-24 |
| Maintainer | TheBloke |
| Model Type | mixtral |
| Model Files | |
| Supported Languages | fr it de es en |
| GGUF Quantization | Yes |
| Quantization Type | gguf |
| Model Architecture | AutoModel |
| License | apache-2.0 |
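
Since the listing marks the weights as GGUF-quantized and maintained on Hugging Face by TheBloke, a minimal loading sketch with llama-cpp-python is shown below. The repo id follows the maintainer and model name in the table above; the specific `.gguf` filename and the context/GPU settings are assumptions — check the repository's file list and pick the quantization that fits your RAM/VRAM.

```python
# Minimal sketch: download one GGUF quant and run a plain completion.
# Filename and runtime settings are assumptions, not taken from the listing.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch a single quantized file from the Hugging Face repo (filename assumed).
model_path = hf_hub_download(
    repo_id="TheBloke/Mixtral-8x7B-v0.1-GGUF",
    filename="mixtral-8x7b-v0.1.Q2_K.gguf",  # choose a quant that fits your hardware
)

# Load the GGUF file; n_gpu_layers=-1 offloads all layers to the GPU if one is available.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

# This is a base (non-instruct) model, so use plain text completion.
out = llm("Mixtral is a sparse mixture-of-experts model that", max_tokens=64)
print(out["choices"][0]["text"])
```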