| Model | Likes | Downloads | VRAM |
|---|---|---|---|
| MixTAO 7Bx2 MoE Instruct V7.0 GGUF | 10 | 339 | 4 GB |
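The GGUF build listed above is the quantized variant, which is what brings the VRAM requirement down to roughly 4 GB. As a minimal sketch of how such a file could be run locally with `llama-cpp-python` (the file name below is a placeholder, not taken from this listing), assuming you have already downloaded one of the GGUF quantizations:

```python
# Sketch: loading a quantized GGUF build with llama-cpp-python.
# model_path is a placeholder; point it at whichever quantization of
# MixTAO 7Bx2 MoE Instruct V7.0 GGUF you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./mixtao-7bx2-moe-instruct-v7.0.Q4_K_M.gguf",  # assumed file name
    n_ctx=32768,      # matches the 32K context length listed below
    n_gpu_layers=-1,  # offload all layers to GPU if VRAM allows
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a mixture-of-experts model is."}]
)
print(out["choices"][0]["message"]["content"])
```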
| Best Alternatives | HF Rank | Context/RAM | Downloads | Likes |
|---|---|---|---|---|
| Mixtral 7Bx2 MoE 13B | — | 32K / 25.8 GB | 741 | 7 |
| MemGPT DPO MoE Test | — | 32K / 25.8 GB | 1 | 5 |
| ...tral 7B Instruct V0.2 2x7B MoE | — | 32K / 25.8 GB | 1630 | 4 |
| Mistral Math 2x7b Mix | — | 32K / 25.8 GB | 445 | 4 |
| Megatron V3 2x7B | — | 32K / 25.8 GB | 721 | 3 |
| ...tral Instruct MoE Experimental | — | 32K / 25.8 GB | 723 | 2 |
| MoEstral 2x7B | — | 32K / 25.8 GB | 11 | 2 |
| Rain 2x7B MoE 32K V0.1 | — | 32K / 25.8 GB | 1 | 2 |
| Mistral 2x7b V0.1 | — | 32K / 25.8 GB | 3 | 1 |
| My Mixtral 2x7B | — | 32K / 25.8 GB | 3 | 1 |
| LLM Name | MixTAO 7Bx2 MoE Instruct V7.0 |
|---|---|
| Repository | Open on 🤗 |
| Model Size | 12.9b |
| Required VRAM | 25.7 GB |
| Updated | 2024-07-04 |
| Maintainer | zhengr |
| Model Type | mixtral |
| Instruction-Based | Yes |
| Model Files | |
| Model Architecture | MixtralForCausalLM |
| License | apache-2.0 |
| Context Length | 32768 |
| Model Max Length | 32768 |
| Transformers Version | 4.38.0.dev0 |
| Tokenizer Class | LlamaTokenizer |
| Padding Token | `<s>` |
| Vocabulary Size | 32000 |
| Initializer Range | 0.02 |
| Torch Data Type | bfloat16 |
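Given the configuration above (MixtralForCausalLM architecture, LlamaTokenizer, bfloat16 weights, 32768-token context), the full-precision checkpoint can be loaded with the Hugging Face `transformers` library along these lines. This is a minimal sketch; the repository id is assumed from the maintainer and model name rather than copied from the listing, so verify it on the Hub before use:

```python
# Minimal sketch of loading the full-precision checkpoint with transformers.
# Repo id is assumed from maintainer "zhengr" and the model name; verify on the Hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "zhengr/MixTAO-7Bx2-MoE-Instruct-v7.0"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the listed Torch Data Type
    device_map="auto",           # ~25.7 GB of VRAM required at bf16
)

inputs = tokenizer(
    "Explain mixture-of-experts routing in one paragraph.",
    return_tensors="pt",
).to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```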