Mixtral 8x22B V0.1 by v2ray


Tags: Autotrain compatible, Endpoints compatible, Mixtral, MoE, Safetensors, Sharded, Tensorflow, Region: US. Languages: De, En, Es, Fr, It.

Mixtral 8x22B V0.1 Benchmarks

Scores (nn.n%) indicate how Mixtral 8x22B V0.1 (v2ray/Mixtral-8x22B-v0.1) compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").

Mixtral 8x22B V0.1 Parameters and Internals

Model Type: generative, sparse mixture of experts
Additional Notes: Mixtral-8x22B-v0.1 is a pretrained base model and does not have any moderation mechanisms.
Supported Languages: fr, it, de, es, en (all supported)
LLM Name: Mixtral 8x22B V0.1
Repository 🤗: https://huggingface.co/v2ray/Mixtral-8x22B-v0.1
Model Size: 140.6b
Required VRAM: 212 GB
Updated: 2025-02-22
Maintainer: v2ray
Model Type: mixtral
Model Files (59 safetensors shards; shards 1-44 listed): 5.0 GB: 1-of-59, 4.8 GB: 2-of-59, 4.8 GB: 3-of-59, 4.8 GB: 4-of-59, 4.8 GB: 5-of-59, 4.8 GB: 6-of-59, 4.8 GB: 7-of-59, 4.8 GB: 8-of-59, 4.8 GB: 9-of-59, 4.8 GB: 10-of-59, 4.8 GB: 11-of-59, 4.8 GB: 12-of-59, 4.8 GB: 13-of-59, 4.8 GB: 14-of-59, 4.8 GB: 15-of-59, 4.8 GB: 16-of-59, 4.8 GB: 17-of-59, 4.8 GB: 18-of-59, 4.8 GB: 19-of-59, 4.8 GB: 20-of-59, 4.8 GB: 21-of-59, 4.8 GB: 22-of-59, 4.8 GB: 23-of-59, 4.9 GB: 24-of-59, 5.0 GB: 25-of-59, 5.0 GB: 26-of-59, 4.9 GB: 27-of-59, 4.8 GB: 28-of-59, 4.8 GB: 29-of-59, 4.8 GB: 30-of-59, 4.8 GB: 31-of-59, 4.8 GB: 32-of-59, 4.8 GB: 33-of-59, 4.8 GB: 34-of-59, 4.8 GB: 35-of-59, 4.8 GB: 36-of-59, 4.8 GB: 37-of-59, 4.8 GB: 38-of-59, 4.8 GB: 39-of-59, 4.8 GB: 40-of-59, 4.8 GB: 41-of-59, 4.8 GB: 42-of-59, 4.8 GB: 43-of-59, 4.8 GB: 44-of-59
Supported Languages: fr, it, de, es, en
Model Architecture: MixtralForCausalLM
License: apache-2.0
Context Length: 65536
Model Max Length: 65536
Transformers Version: 4.40.0.dev0
Vocabulary Size: 32000
Torch Data Type: bfloat16
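
The details above translate directly into a standard Transformers load. A minimal sketch, assuming transformers >= 4.40 (per the version listed above), the accelerate package, and enough GPU memory for the ~212 GB bfloat16 checkpoint; the prompt text is illustrative:

```python
# pip install transformers accelerate
import torch
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

repo = "v2ray/Mixtral-8x22B-v0.1"

# Sanity-check the card's numbers straight from the config:
# MixtralForCausalLM, 65536-token context, 32000-token vocabulary.
config = AutoConfig.from_pretrained(repo)
print(config.architectures, config.max_position_embeddings, config.vocab_size)

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,  # matches the card's Torch Data Type
    device_map="auto",           # spreads the 59 safetensors shards across devices
)

# This is a pretrained base model: plain-text completion only,
# no chat template and no moderation layer.
prompt = "Mixtral 8x22B is a sparse mixture-of-experts model that"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

device_map="auto" is the usual way to fit a checkpoint of this size, splitting layers across multiple GPUs and spilling to CPU RAM if needed, at the cost of slower generation.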

Quantized Models of the Mixtral 8x22B V0.1

Model | Likes | Downloads | VRAM
Mixtral 8x22B V0.1 AWQ | 36 | 4745 | 73 GB
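
For single-node use, the AWQ quantization above cuts the footprint from 212 GB to roughly 73 GB. A minimal loading sketch: Transformers can load AWQ checkpoints directly when autoawq is installed, but the repo id below is an assumption (the table names the quant, not its Hugging Face path), so substitute the actual AWQ repository:

```python
# pip install transformers autoawq accelerate
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id for the AWQ quant listed above -- verify before use.
repo = "MaziyarPanahi/Mixtral-8x22B-v0.1-AWQ"

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,  # AWQ kernels run in float16
    device_map="auto",          # ~73 GB: one 80 GB GPU or several 24 GB cards
)
```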

Best Alternatives to Mixtral 8x22B V0.1

Best Alternatives | Context / RAM | Downloads | Likes
Mixtral 8x22B Instruct V0.1 | 64K / 221.4 GB | 153763 | 713
Zephyr Orpo 141B A35b V0.1 | 64K / 207.2 GB | 563 | 266
WizardLM 2 8x22B | 64K / 216.8 GB | 7110 | 399
Mixtral 8x22B V0.1 | 64K / 221.6 GB | 5953 | 212
Mixtral 8x22B V0.1 | 64K / 212 GB | 4522 | 674
Mixtral 8x22B V0.3 | 64K / 221.4 GB | 46 | 3
XLAM 8x22b R | 64K / 211.8 GB | 320 | 44
...ixtral 8x22B Instruct V0.1 FP8 | 64K / 140.9 GB | 90 | 13
Dolphin 2.9.2 Mixtral 8x22b | 64K / 207.2 GB | 175 | 38
WizardLM 2 8x22B Beige | 64K / 221.4 GB | 49 | 3

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227