LLM Name | Calmex26merge 12B MoE |
Repository | Open on 🤗 |
Base Model(s) | |
Model Size | 7b |
Required VRAM | 25.8 GB |
Updated | 2024-07-27 |
Maintainer | allknowingroger |
Model Type | mixtral |
Model Files | |
Model Architecture | MixtralForCausalLM |
License | apache-2.0 |
Context Length | 32768 |
Model Max Length | 32768 |
Transformers Version | 4.39.3 |
Tokenizer Class | LlamaTokenizer |
Padding Token | <s> |
Vocabulary Size | 32000 |
Torch Data Type | bfloat16 |
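Given the listed architecture (MixtralForCausalLM), tokenizer (LlamaTokenizer), and bfloat16 weights, the model can be loaded with the Hugging Face Transformers library in the usual way. The sketch below assumes the repository id is `allknowingroger/Calmex26merge-12B-MoE` (inferred from the maintainer and model name, not confirmed here) and that roughly the listed 25.8 GB of VRAM is available.

```python
# Minimal loading sketch, assuming the repo id below matches this listing.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allknowingroger/Calmex26merge-12B-MoE"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the listed Torch Data Type
    device_map="auto",           # ~25.8 GB VRAM required per the listing
)

prompt = "Explain mixture-of-experts routing in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```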
Best Alternatives | HF Rank | Context/RAM | Downloads | Likes
---|---|---|---|---
Multimaster 7B V6 | 0.3 | 32K / 142.5 GB | 2741 | 1 |
Mixtral 7B 8expert | 0.3 | 32K / 93.6 GB | 12888 | 260 |
MultiverseBuddy 15B MoE | 0.3 | 32K / 25.8 GB | 241 | 0 |
Mini Mixtral V0.2 | 0.2 | 32K / 25.8 GB | 302 | 3 |
Laserxtral | 0.2 | 32K / 48.3 GB | 935 | 78 |
Lumina 2 | 0.2 | 32K / 37.1 GB | 239 | 0 |
RogerWizard 12B MoE | 0.2 | 32K / 25.8 GB | 235 | 1 |
StarlingMaths 12B MoE | 0.2 | 32K / 25.8 GB | 238 | 0 |
MultiverseMath 12B MoE | 0.2 | 32K / 25.8 GB | 259 | 0 |
WestLakeLaser 12B MoE | 0.2 | 32K / 25.8 GB | 245 | 0 |
Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference!