Mixtral 8x7B Instruct V0.1 GGUF by second-state


Tags: Autotrain compatible, Base model: mistralai/Mixtral-8x7B-Instruct-v0.1, De, En, Es, Fr, GGUF, Instruct, It, License: apache-2.0, Mixtral, MoE, Q2, Quantized, Region: US

Mixtral 8x7B Instruct V0.1 GGUF Benchmarks

Rank the Mixtral 8x7B Instruct V0.1 GGUF Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
Mixtral 8x7B Instruct V0.1 GGUF (second-state/Mixtral-8x7B-Instruct-v0.1-GGUF)

Best Alternatives to Mixtral 8x7B Instruct V0.1 GGUF

Best Alternatives | HF Rank | Context/RAM | Downloads | Likes
...es Mixtral 8x7B 2.4bpw H6 EXL2 | 68.2 | 32K / 14.3 GB | 1 | 2
...es Mixtral 8x7B 3.0bpw H6 EXL2 | 68.2 | 32K / 17.8 GB | 1 | 1
Notux 8x7b V1.3.5bpw EXL2 | 68.2 | 32K / 20.7 GB | 1 | 2
Notux 8x7b V1.3.5bpw H6 EXL2 | 68.2 | 32K / 20.7 GB | 1 | 1
...es Mixtral 8x7B 6.0bpw H6 EXL2 | 68.2 | 32K / 35.3 GB | 1 | 1
Dolphin 2.7 Mixtral 8x7b GGUF |  | 32K / 15.6 GB | 478 | 5
...8x22b Instruct Oh EXL2 2.25bpw |  | 64K / 40.1 GB | 4 | 1
...M 2 8x22B Beige 2.4bpw H6 EXL2 |  | 64K / 42.7 GB | 6 | 0
...M 2 8x22B Beige 3.0bpw H6 EXL2 |  | 64K / 53.2 GB | 15 | 0
...M 2 8x22B Beige 4.0bpw H6 EXL2 |  | 64K / 70.8 GB | 86 | 0

Mixtral 8x7B Instruct V0.1 GGUF Parameters and Internals

LLM Name: Mixtral 8x7B Instruct V0.1 GGUF
Repository: Open on 🤗 Hugging Face
Model Name: Mixtral 8X7B Instruct v0.1
Model Creator: Mistral AI_
Base Model(s): mistralai/Mixtral-8x7B-Instruct-v0.1
Required VRAM: 17.3 GB
Updated: 2024-07-04
Maintainer: second-state
Model Type: mixtral
Instruction-Based: Yes
Model Files: 17.3 GB, 24.2 GB, 22.5 GB, 20.4 GB, 26.4 GB, 28.4 GB, 26.7 GB, 32.2 GB, 33.2 GB, 32.2 GB, 38.4 GB, 49.6 GB
Supported Languages: fr, it, de, es, en
GGUF Quantization: Yes
Quantization Type: gguf|q2|q4_k|q5_k
Model Architecture: MixtralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.36.0.dev0
Vocabulary Size: 32000
Initializer Range: 0.02
Torch Data Type: bfloat16
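
For reference, below is a minimal sketch of how one might download one of these GGUF files and run it locally. The llama-cpp-python bindings are one common choice for GGUF models, not something this page prescribes; the exact .gguf filename inside the repository is an assumption (check the repo's file list), and the [INST] ... [/INST] prompt template follows Mistral AI's published instruct format. Pick a quant file whose size fits your VRAM/RAM budget from the Model Files list above.

```python
# A minimal sketch, not second-state's official usage. Assumes:
#   pip install huggingface_hub llama-cpp-python
# The filename below is a guess at one of the mid-sized quants --
# verify the real filename on the repository's "Files" tab.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quantized file from the repo (filename is hypothetical).
model_path = hf_hub_download(
    repo_id="second-state/Mixtral-8x7B-Instruct-v0.1-GGUF",
    filename="Mixtral-8x7B-Instruct-v0.1-Q4_K_M.gguf",
)

llm = Llama(
    model_path=model_path,
    n_ctx=32768,      # matches the model's 32768-token context length
    n_gpu_layers=-1,  # offload all layers to GPU if VRAM allows; 0 for CPU-only
)

# Mixtral Instruct expects the Mistral [INST] ... [/INST] template.
prompt = "[INST] Summarize what a mixture-of-experts model is in two sentences. [/INST]"
out = llm(prompt, max_tokens=128, temperature=0.7)
print(out["choices"][0]["text"])
```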


Original data from HuggingFace, OpenCompass, and various public git repos.
Release v2024042801