Mixtral 8x22B V0.1 by mistral-community


Tags: Autotrain compatible, De, En, Endpoints compatible, Es, Fr, It, Mixtral, Model-index, Moe, Region:us, Safetensors, Sharded, Tensorflow

Mixtral 8x22B V0.1 Benchmarks (mistral-community/Mixtral-8x22B-v0.1)

Mixtral 8x22B V0.1 Parameters and Internals

Model Type: text-generation
Additional Notes: Mixtral-8x22B is a pretrained generative Sparse Mixture of Experts LLM.
LLM Name: Mixtral 8x22B V0.1
Repository 🤗: https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1
Model Size: 140.6b
Required VRAM: 212 GB
Updated: 2025-02-05
Maintainer: mistral-community
Model Type: mixtral
Model Files: 44 of 59 safetensors shards listed (shard 1: 5.0 GB; shards 2-23: 4.8 GB each; shard 24: 4.9 GB; shards 25-26: 5.0 GB each; shard 27: 4.9 GB; shards 28-44: 4.8 GB each; the listed shards total 212 GB)
Supported Languages: fr, it, de, es, en
Model Architecture: MixtralForCausalLM
License: apache-2.0
Context Length: 65536
Model Max Length: 65536
Transformers Version: 4.40.0.dev0
Vocabulary Size: 32000
Torch Data Type: bfloat16
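
For orientation, here is a minimal loading sketch using the Hugging Face transformers API, under the assumptions that transformers >= 4.40 and the accelerate package are installed (neither is guaranteed by this listing) and that enough device memory is available; the repo id and dtype come from the metadata above.

    # Illustrative sketch only -- assumes transformers >= 4.40 plus accelerate;
    # the full bfloat16 checkpoint needs roughly 212 GB of device memory,
    # so multiple GPUs or CPU offload are required in practice.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo = "mistral-community/Mixtral-8x22B-v0.1"
    tokenizer = AutoTokenizer.from_pretrained(repo)
    model = AutoModelForCausalLM.from_pretrained(
        repo,
        torch_dtype=torch.bfloat16,  # matches the checkpoint dtype listed above
        device_map="auto",           # shard weights across available devices
    )

    prompt = "A sparse Mixture of Experts model works by"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=50)
    print(tokenizer.decode(output[0], skip_special_tokens=True))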

Quantized Models of Mixtral 8x22B V0.1

Model | Likes | Downloads | VRAM
...2 Mixtral 8x22b 6.0bpw H8 EXL2 | 1 | 6 | 105 GB
...2 Mixtral 8x22b 8.0bpw H8 EXL2 | 2 | 7 | 125 GB
...ephyr Orpo 141B A35b V0.1 GGUF | 0 | 27 | 74 GB
WizardLM2 2bit | 0 | 15 | 54 GB
...8x22b Instruct Oh EXL2 2.25bpw | 1 | 6 | 40 GB
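
The EXL2 and GGUF entries above trade accuracy for memory: VRAM scales roughly with bits-per-weight times the 140.6B parameters (e.g. 6.0 bpw x 140.6B / 8 bits per byte is about 105 GB). As a hedged sketch, a GGUF quantization could be run with llama-cpp-python along these lines; the file name below is a placeholder, not a real artifact from these repos.

    # Illustrative sketch only -- model_path is a hypothetical placeholder;
    # substitute a real .gguf file downloaded from one of the repos above.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./mixtral-8x22b-v0.1.Q4_K_M.gguf",  # hypothetical file name
        n_ctx=8192,       # working context; the base model supports up to 65536
        n_gpu_layers=-1,  # offload every layer to GPU if memory allows
    )
    result = llm("Q: What is a Mixture of Experts LLM? A:", max_tokens=64)
    print(result["choices"][0]["text"])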

Best Alternatives to Mixtral 8x22B V0.1

Best Alternatives | Context / RAM | Downloads | Likes
Mixtral 8x22B Instruct V0.1 | 64K / 221.4 GB | 1290738 | 708
Zephyr Orpo 141B A35b V0.1 | 64K / 207.2 GB | 623 | 264
Mixtral 8x22B V0.1 | 64K / 221.6 GB | 1150131 | 209
WizardLM 2 8x22B | 64K / 216.8 GB | 7244 | 396
Mixtral 8x22B V0.3 | 64K / 221.4 GB | 60 | 3
XLAM 8x22b R | 64K / 211.8 GB | 2580 | 43
...ixtral 8x22B Instruct V0.1 FP8 | 64K / 140.9 GB | 97 | 12
Dolphin 2.9.2 Mixtral 8x22b | 64K / 207.2 GB | 178 | 38
...igHuggyD Grey WizardLM 2 8x22B | 64K / 216.6 GB | 10 | 4
...x22B Instruct V0.1 FP8 Dynamic | 64K / 140.9 GB | 22 | 0



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227