Mixtral 8x22B V0.1 Bnb 4bit Smashed by PrunaAI


  4-bit   4bit   Autotrain compatible   Bitsandbytes   Endpoints compatible   Mixtral   Moe   Pruna-ai   Quantized   Region:us   Safetensors   Sharded   Tensorflow

Mixtral 8x22B V0.1 Bnb 4bit Smashed Benchmarks

nn.n% — how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Mixtral 8x22B V0.1 Bnb 4bit Smashed (PrunaAI/Mixtral-8x22B-v0.1-bnb-4bit-smashed)

Mixtral 8x22B V0.1 Bnb 4bit Smashed Parameters and Internals

Model Type 
Causal Language Model
Use Cases 
Areas:
Research, Commercial Applications
Additional Notes 
PrunaAI intends to make AI models cheaper, smaller, faster, and greener.
Training Details 
Data Sources:
WikiText
Methodology:
Compression with llm-int8
Hardware Used:
NVIDIA A100-PCIE-40GB
Input Output 
Performance Tips:
Efficiency gains may differ across hardware and settings; test under conditions matching your specific use case.
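The tip above recommends measuring efficiency in your own conditions. A minimal, hypothetical throughput helper is sketched below; the `generate_fn` callable and token count are illustrative assumptions, not part of the model card:

```python
import time

def tokens_per_second(generate_fn, n_new_tokens):
    """Time a generation callable and return throughput in tokens/sec.

    generate_fn: any zero-argument callable that produces n_new_tokens tokens
    (in practice, a wrapper around model.generate).
    """
    start = time.perf_counter()
    generate_fn()
    elapsed = time.perf_counter() - start
    return n_new_tokens / elapsed

# Stand-in workload; replace the lambda with a real generation call.
rate = tokens_per_second(lambda: sum(range(1_000_000)), n_new_tokens=128)
print(f"{rate:.1f} tokens/sec")
```

Running the same helper on different GPUs or batch sizes gives comparable numbers for the "test in your own conditions" advice above.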
Release Notes 
Version:
1.0
Notes:
The smashed model uses safetensors format and includes efficiency improvements.
LLM Name: Mixtral 8x22B V0.1 Bnb 4bit Smashed
Repository 🤗: https://huggingface.co/PrunaAI/Mixtral-8x22B-v0.1-bnb-4bit-smashed
Model Size: 72.7b
Required VRAM: 80.2 GB
Updated: 2025-01-23
Maintainer: PrunaAI
Model Type: mixtral
Model Files: 5.0 GB each (shards 1-of-17 through 15-of-17), 4.8 GB (16-of-17), 0.4 GB (17-of-17)
Quantization Type: 4bit
Model Architecture: MixtralForCausalLM
Context Length: 65536
Model Max Length: 65536
Transformers Version: 4.40.0.dev0
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32000
Torch Data Type: float16
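The shard list above can be cross-checked against the stated VRAM requirement: the 17 safetensors shards sum to the 80.2 GB figure. A quick sketch:

```python
# Shard sizes in GB, as listed in the model files above:
# fifteen 5.0 GB shards, one 4.8 GB shard, one 0.4 GB shard.
shard_sizes_gb = [5.0] * 15 + [4.8, 0.4]

total_gb = round(sum(shard_sizes_gb), 1)
print(total_gb)  # 80.2 — matches the "Required VRAM" entry
```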

Best Alternatives to Mixtral 8x22B V0.1 Bnb 4bit Smashed

Best Alternatives | Context / RAM | Downloads / Likes
Mixtral 8x22B V0.1 4bit | 64K / 73.6 GB | 34654
...xtral 8x22B Instruct V0.1 4bit | 64K / 80.2 GB | 32611
...xtral 8x22B Instruct V0.1 4bit | 64K / 80.2 GB | 70
Mixtral 8x22B V0.1 4bit | 64K / 73.6 GB | 72
Note: a green score (e.g. "73.2") means the model is better than PrunaAI/Mixtral-8x22B-v0.1-bnb-4bit-smashed.
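Since this checkpoint ships pre-quantized with bitsandbytes 4-bit weights, it can be loaded directly without a quantization config. The sketch below is a hedged, untested example assuming a recent `transformers` release, the `bitsandbytes` package, and roughly 80 GB of GPU memory; the prompt is illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "PrunaAI/Mixtral-8x22B-v0.1-bnb-4bit-smashed"

tokenizer = AutoTokenizer.from_pretrained(repo)  # LlamaTokenizer, vocab size 32000
model = AutoModelForCausalLM.from_pretrained(
    repo,
    device_map="auto",          # spreads the 17 shards across available GPUs
    torch_dtype=torch.float16,  # matches the card's torch data type
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```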


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227