| Best Alternatives | HF Rank | Context / VRAM | Downloads | Likes |
|---|---|---|---|---|
| MonarchCoder MoE 2x7B | — | 32K / 22.8 GB | 637 | 1 |
| Boundary Hermes Chat 2x7B MoE | — | 32K / 25.5 GB | 321 | 1 |
| MixTAO 7Bx2 MoE Instruct V7.0 | — | 32K / 25.7 GB | 627 | 19 |
| DARE TIES 13B | — | 32K / 25.7 GB | 6060 | 10 |
| MultiMash8 13B Slerp | — | 32K / 25.7 GB | 319 | 0 |
| MixTaoTruthful 13B Slerp | — | 32K / 25.7 GB | 317 | 0 |
| MultiMash12 13B Slerp | — | 32K / 25.7 GB | 314 | 0 |
| MultiMash9 13B Slerp | — | 32K / 25.7 GB | 313 | 0 |
| MultiMash11 13B Slerp | — | 32K / 25.7 GB | 311 | 0 |
| MultiMash7 12B Slerp | — | 32K / 25.7 GB | 310 | 0 |
| Property | Value |
|---|---|
| LLM Name | MultiMash5 12B Slerp |
| Repository | Open on 🤗 |
| Base Model(s) | — |
| Merged Model | Yes |
| Model Size | 12.9B |
| Required VRAM | 25.7 GB |
| Updated | 2024-07-07 |
| Maintainer | allknowingroger |
| Model Type | mixtral |
| Model Files | — |
| Model Architecture | MixtralForCausalLM |
| License | apache-2.0 |
| Context Length | 32768 |
| Model Max Length | 32768 |
| Transformers Version | 4.40.2 |
| Tokenizer Class | LlamaTokenizer |
| Padding Token | `<s>` |
| Vocabulary Size | 32000 |
| Initializer Range | 0.02 |
| Torch Data Type | bfloat16 |
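Given the card's metadata (MixtralForCausalLM architecture, LlamaTokenizer, bfloat16 weights, 32K context), a minimal loading sketch with 🤗 Transformers might look like the following. The repo id `allknowingroger/MultiMash5-12B-slerp` is an assumption inferred from the maintainer and model name shown above; verify it on the Hub before use.

```python
# Minimal sketch: loading MultiMash5 12B Slerp with Hugging Face Transformers.
# The repo id below is an ASSUMPTION inferred from the maintainer and model
# name on this card; it is not confirmed by the card itself.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "allknowingroger/MultiMash5-12B-slerp"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches "Torch Data Type: bfloat16"
    device_map="auto",           # requires `accelerate`; card lists ~25.7 GB VRAM
)

prompt = "Write a haiku about model merging."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Prompts plus generated tokens should stay within the 32768-token context length listed on the card.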