| Model | Likes | Downloads | VRAM |
|---|---|---|---|
| MixTAO 7Bx2 MoE V8.1 GGUF | 11 | 612 | 4 GB |
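The GGUF build fits in roughly 4 GB of VRAM, so it can be run locally with a GGUF-capable runtime such as llama-cpp-python. The sketch below is a minimal example, not the maintainer's documented usage; the model path is hypothetical and should point at whichever quantized `.gguf` file you download from the repository.

```python
# Minimal sketch: run the GGUF quantization with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="./mixtao-7bx2-moe-v8.1.Q4_K_M.gguf",  # hypothetical local file name
    n_ctx=32768,       # the model's listed context length
    n_gpu_layers=-1,   # offload all layers to the GPU if VRAM allows
)

out = llm("Q: What is a mixture-of-experts model? A:", max_tokens=128)
print(out["choices"][0]["text"])
```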
| Best Alternatives | HF Rank | Context/RAM | Downloads | Likes |
|---|---|---|---|---|
| MonarchCoder MoE 2x7B | — | 32K / 22.8 GB | 726 | 1 |
| Boundary Hermes Chat 2x7B MoE | — | 32K / 25.5 GB | 384 | 1 |
| MixTAO 7Bx2 MoE Instruct V7.0 | — | 32K / 25.7 GB | 755 | 19 |
| DARE TIES 13B | — | 32K / 25.7 GB | 6222 | 10 |
| MultiMash5 12B Slerp | — | 32K / 25.7 GB | 387 | 0 |
| MultiMash2 12B Slerp | — | 32K / 25.7 GB | 385 | 0 |
| MultiMash7 12B Slerp | — | 32K / 25.7 GB | 382 | 0 |
| MultiMash6 12B Slerp | — | 32K / 25.7 GB | 380 | 0 |
| MultiMash 12B Slerp | — | 32K / 25.7 GB | 379 | 0 |
| MultiMash9 13B Slerp | — | 32K / 25.7 GB | 333 | 0 |
| LLM Name | MixTAO 7Bx2 MoE V8.1 |
|---|---|
| Repository | Open on Hugging Face |
| Model Size | 12.9b |
| Required VRAM | 25.8 GB |
| Updated | 2024-07-04 |
| Maintainer | zhengr |
| Model Type | mixtral |
| Model Files | |
| Model Architecture | MixtralForCausalLM |
| License | apache-2.0 |
| Context Length | 32768 |
| Model Max Length | 32768 |
| Transformers Version | 4.38.1 |
| Tokenizer Class | LlamaTokenizer |
| Padding Token | `<s>` |
| Vocabulary Size | 32000 |
| Initializer Range | 0.02 |
| Torch Data Type | bfloat16 |
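Given the MixtralForCausalLM architecture and bfloat16 weights listed above, the full-precision checkpoint can be loaded with the transformers library. This is a minimal sketch under two assumptions: the repository id `zhengr/MixTAO-7Bx2-MoE-v8.1` is inferred from the maintainer and model name (confirm it against the linked Hugging Face page), and roughly 26 GB of VRAM is available for the bf16 weights.

```python
# Minimal sketch: load the full bfloat16 checkpoint with transformers.
# The repo id is an assumption based on the maintainer/model name above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "zhengr/MixTAO-7Bx2-MoE-v8.1"  # assumed repository id; verify before use

tokenizer = AutoTokenizer.from_pretrained(repo_id)  # LlamaTokenizer, per the spec table
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the listed Torch Data Type
    device_map="auto",           # needs ~25.8 GB of VRAM in bf16
)

inputs = tokenizer(
    "Explain mixture-of-experts routing in one sentence.",
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

If 26 GB of VRAM is not available, the GGUF quantization shown earlier (about 4 GB) is the more practical option.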