Mixtral 8x22B V0.1 AWQ by mistral-community


Tags: 4-bit · AWQ · Quantized · AutoTrain compatible · Endpoints compatible · Mixtral · MoE · Safetensors · Sharded · Tensorflow · Region: US · Languages: de, en, es, fr, it · Base model (quantized from): v2ray/Mixtral-8x22B-v0.1

Mixtral 8x22B V0.1 AWQ Benchmarks

Scores are shown as nn.n%: how the model compares to the reference models Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Mixtral 8x22B V0.1 AWQ (mistral-community/Mixtral-8x22B-v0.1-AWQ)

Mixtral 8x22B V0.1 AWQ Parameters and Internals

Model Type: text-generation, quantized
Additional Notes: This is an AWQ-quantized version of the base model, produced by MaziyarPanahi for more memory-efficient inference (a minimal loading sketch follows this section).
Supported Languages: en (high), es (high), de (high), it (high), fr (high)
Training Details:
  Context Length: 65536
  Model Architecture: 176B-parameter MoE with ~40B active per token
Input/Output:
  Accepted Modalities: text
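
The card does not prescribe a specific inference stack, so the following is a minimal sketch, assuming the `transformers` AWQ integration (which requires the `autoawq` package) and enough GPU memory for the ~73.7 GB of quantized weights; the prompt string and generation settings are illustrative only.

```python
# Minimal inference sketch (an assumption, not the maintainer's documented
# recipe): transformers detects the AWQ quantization from config.json when
# the `autoawq` package is installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistral-community/Mixtral-8x22B-v0.1-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # spread the quantized weights across available GPUs
)

# Illustrative prompt; this is a base (non-instruct) model, so plain
# completion-style prompting is appropriate.
inputs = tokenizer("The Mixtral architecture works by", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```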
LLM Name: Mixtral 8x22B V0.1 AWQ
Repository: https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1-AWQ
Model Name: Mixtral-8x22B-v0.1-AWQ
Model Creator: v2ray
Base Model(s): v2ray/Mixtral-8x22B-v0.1
Model Size: 19.2b
Required VRAM: 73.7 GB
Updated: 2025-01-20
Maintainer: mistral-community
Model Type: mixtral
Model Files: 15 sharded safetensors files (shards 1-14 at 5.0 GB each, shard 15 at 3.7 GB; 73.7 GB total). See the download sketch after this list.
Supported Languages: en, es, de, it, fr
AWQ Quantization: Yes
Quantization Type: awq
Model Architecture: MixtralForCausalLM
Context Length: 65536
Model Max Length: 65536
Transformers Version: 4.38.2
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32000
Torch Data Type: float16
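
As a quick way to verify the shard list and config values above, here is a minimal sketch, assuming `huggingface_hub` and `transformers` are installed; note that `snapshot_download` pulls the full ~73.7 GB of shards, so the config check alone may be all you need.

```python
# Minimal sketch (assumes huggingface_hub + transformers are installed):
# fetch the 15 safetensors shards and confirm the config values listed above.
from huggingface_hub import snapshot_download
from transformers import AutoConfig

repo_id = "mistral-community/Mixtral-8x22B-v0.1-AWQ"

# Downloads all shards (~73.7 GB) plus the JSON config/tokenizer files.
local_dir = snapshot_download(repo_id, allow_patterns=["*.safetensors", "*.json"])

# Reads only config.json; no weights are loaded.
config = AutoConfig.from_pretrained(repo_id)
print(config.architectures)            # ['MixtralForCausalLM']
print(config.max_position_embeddings)  # 65536
print(config.vocab_size)               # 32000
print(config.torch_dtype)              # float16
```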

Best Alternatives to Mixtral 8x22B V0.1 AWQ

Best Alternatives                  | Context / RAM | Downloads | Likes
...olphin 2.9.2 Mixtral 8x22b AWQ  | 64K / 73.7 GB | 4904      | 0
WizardLM 2 8x22B AWQ               | 64K / 73.7 GB | 9279      | 12
...ixtral 8x22B Instruct V0.1 AWQ  | 64K / 73.7 GB | 1381      | 0
Karasu Mixtral 8x22B V0.1 AWQ      | 64K / 73.7 GB | 13        | 7
Zephyr Orpo 141B A35b V0.1 AWQ     | 64K / 73.7 GB | 23        | 2
... 8x22B Instruct V0.1 GPTQ 4bit  | 64K / 74.1 GB | 180       | 1
MixTAO 19B Pass                    | 32K / 38.1 GB | 26        | 1
Multimerge 19B Pass                | 32K / 38 GB   | 10        | 0
Lorge 2x7B UAMM                    | 32K / 38.2 GB | 16        | 0
Mistralmath 15B Pass               | 32K / 38.5 GB | 11        | 0
Note: a green Score (e.g. "73.2") means that the model is better than mistral-community/Mixtral-8x22B-v0.1-AWQ.


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227