Mistral 7B MoEified 8x by kalomaze


Tags: arXiv:2303.01610 · Autotrain compatible · Endpoints compatible · F16 · GGML · GGUF · Mixtral · Quantized · Region: US · Safetensors · Sharded · TensorFlow


Mistral 7B MoEified 8x Parameters and Internals

Model Type: dense language model
Use Cases:
  Areas: Research
  Primary Use Cases: Adaptive computation for efficient token prediction
Additional Notes: The method modifies the dense language model by dividing its MLP layers into "experts" and initializing router layers for unbiased expert activation.
Training Details:
  Methodology: Division of MLP layers into experts, with router layers initialized for equal expert usage.
  Model Architecture: Individual MLP layers are sliced into multiple experts, with router layers initialized to ensure equal expert usage (see the sketch below).
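
The slicing-plus-router recipe can be illustrated in a few lines of PyTorch. This is a minimal sketch of the general idea, not kalomaze's actual conversion script: it assumes a Mistral-style MLP (gate/up/down projections) and splits the intermediate dimension evenly across experts, with a zero-initialized router so that the softmax over routing logits starts out uniform.

```python
# Minimal sketch of MoE-ifying one Mistral-style MLP (gate/up/down
# projections). Names are illustrative; this is not kalomaze's code.
import torch.nn as nn

def moeify_mlp(gate_proj: nn.Linear, up_proj: nn.Linear,
               down_proj: nn.Linear, num_experts: int = 8):
    """Slice one dense MLP into `num_experts` experts plus a router."""
    intermediate, hidden = gate_proj.weight.shape  # 14336, 4096 for Mistral 7B
    chunk = intermediate // num_experts            # 14336 // 8 = 1792 per expert

    experts = nn.ModuleList()
    for i in range(num_experts):
        sl = slice(i * chunk, (i + 1) * chunk)
        gate = nn.Linear(hidden, chunk, bias=False)
        up = nn.Linear(hidden, chunk, bias=False)
        down = nn.Linear(chunk, hidden, bias=False)
        # Each expert takes one contiguous slice of the original
        # intermediate dimension, so the experts jointly cover the
        # full dense MLP.
        gate.weight.data = gate_proj.weight.data[sl].clone()
        up.weight.data = up_proj.weight.data[sl].clone()
        down.weight.data = down_proj.weight.data[:, sl].clone()
        experts.append(nn.ModuleDict({"gate": gate, "up": up, "down": down}))

    # Zero-initialized router: softmax over all-zero logits is uniform,
    # so every expert starts with the same activation probability
    # ("unbiased expert activation" / "equal expert usage").
    router = nn.Linear(hidden, num_experts, bias=False)
    nn.init.zeros_(router.weight)
    return experts, router
```

Because the router starts uniform, no expert is favored at initialization; subsequent fine-tuning is what lets the router and the experts specialize.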
LLM Name: Mistral 7B MoEified 8x
Repository: 🤗 https://huggingface.co/kalomaze/Mistral-7b-MoEified-8x
Model Size: 7B
Required VRAM: 14.6 GB
Updated: 2025-03-13
Maintainer: kalomaze
Model Type: mixtral
Model Files: 14.5 GB in 15 safetensors shards (shards 1-14: 1.0 GB each; shard 15: 0.6 GB)
GGML Quantization: Yes
GGUF Quantization: Yes
Quantization Type: ggml|gguf
Model Architecture: MixtralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.41.1
Tokenizer Class: PreTrainedTokenizerFast
Vocabulary Size: 32000
Torch Data Type: bfloat16
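
A minimal sketch of loading this checkpoint with Hugging Face transformers, using the repository ID, bfloat16 dtype, and Transformers version listed above; the prompt string is just an example:

```python
# Minimal loading sketch, assuming transformers >= 4.41.1 (per the
# table above) and the `accelerate` package for device_map="auto".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kalomaze/Mistral-7b-MoEified-8x"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the Torch Data Type above
    device_map="auto",           # ~14.6 GB of VRAM required in bf16
)

prompt = "Mixture-of-experts models route each token to"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```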

Best Alternatives to Mistral 7B MoEified 8x

Best Alternatives                   | Context / RAM | Downloads | Likes
Mixtral 7Bx2 MoE GGUF               | 32K / 4.8 GB  | 358       | 3
Laserxtral GGUF                     | 32K / 8.8 GB  | 156       | 20
...izardLM 2 4x7B MoE EXL2 3 0bpw   | 32K / 9.4 GB  | 14        | 1
Buttercup 4x7B 6bpw EXL2            | 32K / 18.4 GB | 8         | 2
BurningBruce 003 EXL2 B8.0          | 32K / 24.3 GB | 13        | 1
...ixtral 2x7b DPO 8.0bpw H8 EXL2   | 32K / 13 GB   | 20        | 4
...ixtral 2x7b DPO 5.0bpw H6 EXL2   | 32K / 8.3 GB  | 10        | 1
...ixtral 2x7b DPO 4.0bpw H6 EXL2   | 32K / 6.7 GB  | 9         | 1
...ixtral 2x7b DPO 6.0bpw H6 EXL2   | 32K / 9.8 GB  | 8         | 1
...t V0.2 2x7B MoE 6.0bpw H6 EXL2   | 32K / 9.9 GB  | 11        | 1



Original data from HuggingFace, OpenCompass and various public git repos.