| Best Alternatives | HF Rank | Context/RAM | Downloads | Likes |
|---|---|---|---|---|
| ...ixtral 8x7B Instruct V0.1 GGUF | 68.2 | 0K / 15.6 GB | 566 | 574 |
| ...ixtral 8x7B Instruct V0.1 GGUF | 68.2 | 0K / 17.3 GB | 41 | 2 |
| Tinyllama Evol Instruct | — | 0K / 0.1 GB | 17 | 0 |
| Dolphin 2 6 Phi 2 GGUF | — | 0K / 1.2 GB | 337 | 68 |
| Phi 3 Mini 4K Instruct GGUF | — | 0K / 1.4 GB | 119 | 12 |
| ...i 3 Mini 4K Instruct V0.3 GGUF | — | 0K / 1.4 GB | 29 | 5 |
| ...i 3 Mini 4K Instruct V0.2 GGUF | — | 0K / 1.4 GB | 197 | 2 |
| ...i 3 Mini 4K Instruct V0.1 GGUF | — | 0K / 1.4 GB | 187 | 1 |
| Mistral7b Instruct Unreal Gguf | — | 0K / 1.6 GB | 557 | 0 |
| Lora | — | 0K / 2.2 GB | 24 | 0 |
| LLM Name | Dolphin 2.6 Mixtral 8x7b GGUF |
|---|---|
| Repository | Open on 🤗 |
| Model Name | Dolphin 2.6 Mixtral 8X7B |
| Model Creator | Cognitive Computations |
| Base Model(s) | |
| Required VRAM | 15.6 GB |
| Updated | 2024-07-07 |
| Maintainer | TheBloke |
| Model Type | mixtral |
| Instruction-Based | Yes |
| Model Files | |
| Supported Languages | en |
| GGUF Quantization | Yes |
| Quantization Type | gguf |
| Model Architecture | AutoModel |
| License | apache-2.0 |