LLM Name | Merged Model MoE
---|---
Repository 🤗 | https://huggingface.co/EthanLiu1991/Merged_model_MoE
Merged Model | Yes
Model Size | 7b
Required VRAM | 53.3 GB
Updated | 2024-09-16
Maintainer | EthanLiu1991
Model Type | mixtral
Model Files | 
Model Architecture | MixtralForCausalLM
License | cc-by-nc-4.0
Context Length | 32768
Model Max Length | 32768
Transformers Version | 4.41.2
Tokenizer Class | LlamaTokenizer
Padding Token | `<s>`
Vocabulary Size | 32000
Torch Data Type | float16
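Given the metadata above (MixtralForCausalLM architecture, LlamaTokenizer, float16 weights, 32768-token context), the model should load through the standard `transformers` auto classes. The snippet below is a minimal sketch based on that metadata, not an official example from the maintainer; the prompt text and generation settings are illustrative.

```python
# Minimal loading sketch based on the listed metadata; not an official example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "EthanLiu1991/Merged_model_MoE"

# AutoTokenizer resolves the listed LlamaTokenizer class from the repo config.
tokenizer = AutoTokenizer.from_pretrained(repo_id)

# float16 matches the listed Torch Data Type; device_map="auto" spreads the
# ~53 GB of weights across available GPUs (or offloads to CPU if necessary).
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Illustrative prompt and generation settings.
prompt = "Explain what a mixture-of-experts model is in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```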
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
Multimaster 7B V6 | 32K / 142.5 GB | 3152 | 1
Mixtral 7B 8expert | 32K / 93.6 GB | 15574 | 263
Laserxtral | 32K / 48.3 GB | 4885 | 78
MultiverseBuddy 15B MoE | 32K / 25.8 GB | 6 | 0
Mini Mixtral V0.2 | 32K / 25.8 GB | 63 | 3
Multilingual Mistral | 32K / 93.5 GB | 674 | 2
Lumina 2 | 32K / 37.1 GB | 5 | 0
RogerWizard 12B MoE | 32K / 25.8 GB | 1 | 1
StarlingMaths 12B MoE | 32K / 25.8 GB | 5 | 0
MultiverseMath 12B MoE | 32K / 25.8 GB | 6 | 0