Mixtral 8x22B Instruct V0.1 by MaziyarPanahi


Tags: Autotrain compatible, Conversational, De, En, Endpoints compatible, Es, Fr, Instruct, It, Mixtral, MoE, Region: us, Safetensors, Sharded, Tensorflow

Mixtral 8x22B Instruct V0.1 Benchmarks

nn.n%: how the model compares to the reference models Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
Mixtral 8x22B Instruct V0.1 (MaziyarPanahi/Mixtral-8x22B-Instruct-v0.1)

Mixtral 8x22B Instruct V0.1 Parameters and Internals

Model Type:
instruction fine-tuned causal language model
Additional Notes:
The model supports function calling.
Supported Languages:
en (fluent), es (fluent), it (fluent), de (fluent), fr (fluent)
Input / Output:
Input Format: chat message format
Accepted Modalities: text
Output Format: text
Performance Tips: use the tokenizer shipped with this repository for best results.
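
The input format above is the standard chat message list used by the Transformers library. As a concrete illustration, here is a minimal sketch that renders such messages into a prompt string with the repository's own tokenizer, per the performance tip above. It assumes the standard transformers chat-template API; the example message is invented for illustration.

```python
from transformers import AutoTokenizer

# Use the tokenizer shipped with this repository, as recommended above.
tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/Mixtral-8x22B-Instruct-v0.1")

# Chat message format: a list of {"role", "content"} dicts.
messages = [
    {"role": "user", "content": "Translate 'good morning' into French, Spanish, Italian, and German."},
]

# Render the messages with the model's own chat template; add_generation_prompt
# appends the tokens that cue the model to reply as the assistant.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```

For function calling, recent transformers versions also accept a `tools=` argument to `apply_chat_template`; whether this repository's chat template defines tool-use formatting is not stated here, so verify against the tokenizer configuration before relying on it.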
LLM Name: Mixtral 8x22B Instruct V0.1
Repository: 🤗 https://huggingface.co/MaziyarPanahi/Mixtral-8x22B-Instruct-v0.1
Model Size: 140.6B
Required VRAM: 221.4 GB
Updated: 2024-12-21
Maintainer: MaziyarPanahi
Model Type: mixtral
Instruction-Based: Yes
Model Files: 59 shards; shards 1-46 listed: 1-of-59 through 22-of-59 at 4.8 GB each, 23-of-59 at 4.9 GB, 24-of-59 and 25-of-59 at 5.0 GB each, 26-of-59 at 4.9 GB, 27-of-59 through 46-of-59 at 4.8 GB each
Supported Languages: en, es, it, de, fr
Model Architecture: MixtralForCausalLM
License: apache-2.0
Context Length: 65536
Model Max Length: 65536
Transformers Version: 4.38.0
Vocabulary Size: 32768
Torch Data Type: bfloat16
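
Given the metadata above (sharded safetensors checkpoint, bfloat16 weights, Transformers 4.38.0, 65536-token context), loading follows the stock transformers pattern, and the shards are downloaded and assembled transparently. A minimal sketch, assuming accelerate is installed for `device_map="auto"` and the machine has enough combined GPU memory (221.4 GB per the table above); the prompt is invented for illustration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MaziyarPanahi/Mixtral-8x22B-Instruct-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)

# torch_dtype matches the checkpoint's bfloat16 storage dtype; device_map="auto"
# spreads the MixtralForCausalLM layers across all visible GPUs.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize the Apache-2.0 license in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the rendered prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```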

Best Alternatives to Mixtral 8x22B Instruct V0.1

Best Alternatives | Context / RAM | Downloads | Likes
Mixtral 8x22B Instruct V0.1 | 64K / 221.4 GB | 274760 | 4696
...ixtral 8x22B Instruct V0.1 FP8 | 64K / 140.9 GB | 570 | 2
...igHuggyD Grey WizardLM 2 8x22B | 64K / 216.6 GB | 19 | 4
WizardLM 2 8x22B Beige | 64K / 221.4 GB | 24 | 3
...x22B Instruct V0.1 FP8 Dynamic | 64K / 140.9 GB | 22 | 0
...ral 8x22B Instruct V0.1 FP8 V2 | 64K / 140.9 GB | 17 | 0
...ral 8x22B Instruct V0.1 FP8 V1 | 64K / 140.9 GB | 10 | 0
Mixtral 8x22b Instruct Oh | 64K / 221.6 GB | 432 | 9
Mixtral 8x22B GO Instruct V1 | 64K / 280.8 GB | 19 | 2
Goku 8x22B V0.2 | 64K / 211.2 GB | 28 | 6

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217