LLM Explorer: A Curated Large Language Model Directory and Analytics

Mixtral 8x7B Instruct V0.1 Hf Attn 4bit MoE 2bit HQQ by mobiuslabsgmbh


Tags: 4bit, Autotrain compatible, Conversational, Instruct, License: apache-2.0, Mixtral, MoE, Quantized, Region: US

Mixtral 8x7B Instruct V0.1 Hf Attn 4bit MoE 2bit HQQ (mobiuslabsgmbh/Mixtral-8x7B-Instruct-v0.1-hf-attn-4bit-moe-2bit-HQQ)
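Since this is the Instruct variant, single-turn prompts should be wrapped in Mixtral's [INST] chat template. A minimal sketch (the wrapper format follows Mistral's documented convention; the helper function itself is ours, not part of the model card):

```python
def build_mixtral_prompt(user_message: str) -> str:
    """Wrap a single-turn user message in Mixtral-Instruct's [INST] template.

    Mistral's documented single-turn format is: <s>[INST] {message} [/INST]
    The model's reply is generated after the closing [/INST] tag.
    """
    return f"<s>[INST] {user_message.strip()} [/INST]"

prompt = build_mixtral_prompt("Summarize HQQ quantization in one sentence.")
print(prompt)
```

In practice the tokenizer's own chat template (if the repository ships one) should take precedence over hand-built strings.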

Best Alternatives to Mixtral 8x7B Instruct V0.1 Hf Attn 4bit MoE 2bit HQQ

Best Alternatives | HF Rank | Context / RAM | Downloads | Likes
...es Mixtral 8x7B 2.4bpw H6 EXL2 | 63.7 | 32K / 14.3 GB | 2 | 2
...es Mixtral 8x7B 3.0bpw H6 EXL2 | 63.7 | 32K / 17.8 GB | 4 | 1
Notux 8x7b V1 3.5bpw EXL2 | 63.7 | 32K / 20.7 GB | 6 | 2
Notux 8x7b V1 3.5bpw H6 EXL2 | 63.7 | 32K / 20.7 GB | 3 | 1
...es Mixtral 8x7B 6.0bpw H6 EXL2 | 63.7 | 32K / 35.3 GB | 2 | 1
...8x7B Instruct V0.1 Hf 4bit Mlx | n/a | 32K / 10.4 GB | 89 | 1
Mixtral 8x7B Instruct 2bit | n/a | 32K / 12.2 GB | 37 | 3
...ruct 8x7b Zloss Bpw225 H6 EXL2 | n/a | 32K / 13.5 GB | 10 | 0
... Mixtral 8x7b 2.4bpw H6 EXL2 2 | n/a | 32K / 14.3 GB | 2 | 2
....6 Mixtral 8x7b 2.4bpw H6 EXL2 | n/a | 32K / 14.3 GB | 2 | 2
Note: on the source page, a green score (e.g. "73.2") marks a model that scores better than mobiuslabsgmbh/Mixtral-8x7B-Instruct-v0.1-hf-attn-4bit-moe-2bit-HQQ.
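The EXL2 sizes in the alternatives table follow almost directly from the bits-per-weight figure in each model name: file size is roughly parameter count times bpw divided by 8. A rough sketch, assuming Mixtral 8x7B's commonly cited total of about 46.7B parameters (the exact count is not stated on this page):

```python
# Rough EXL2 checkpoint-size estimate: params * bits_per_weight / 8 bytes.
MIXTRAL_PARAMS = 46.7e9  # approximate total parameter count (assumption)

def exl2_size_gb(bits_per_weight: float, params: float = MIXTRAL_PARAMS) -> float:
    """Estimated checkpoint size in decimal GB for a given bits-per-weight."""
    return params * bits_per_weight / 8 / 1e9

for bpw, listed_gb in [(2.4, 14.3), (3.0, 17.8), (6.0, 35.3)]:
    print(f"{bpw} bpw: estimated {exl2_size_gb(bpw):.1f} GB, listed {listed_gb} GB")
```

The small gap between the estimates and the listed sizes is consistent with quantization metadata and components stored at higher precision.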

Mixtral 8x7B Instruct V0.1 Hf Attn 4bit MoE 2bit HQQ Parameters and Internals

LLM Name: Mixtral 8x7B Instruct V0.1 Hf Attn 4bit MoE 2bit HQQ
Repository: mobiuslabsgmbh/Mixtral-8x7B-Instruct-v0.1-hf-attn-4bit-moe-2bit-HQQ (Hugging Face)
Required VRAM: 18.2 GB
Updated: 2024-02-29
Maintainer: mobiuslabsgmbh
Model Type: mixtral
Instruction-Based: Yes
Model Files: 18.2 GB
Quantization Type: 4bit
Model Architecture: MixtralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.36.1
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32000
Initializer Range: 0.02
Torch Data Type: bfloat16
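The 18.2 GB figure is broadly consistent with the mixed scheme in the model name: attention weights at 4-bit and the much larger MoE expert weights at 2-bit. A back-of-the-envelope lower bound, assuming Mixtral 8x7B's published architecture (32 layers, hidden size 4096, 8 experts with FFN size 14336, grouped-query attention with a 1024-wide KV projection, 32000-token vocabulary) and ignoring quantization metadata:

```python
# Lower-bound weight memory for the 4-bit-attention / 2-bit-MoE HQQ scheme.
# All dimensions below are Mixtral 8x7B's published architecture (assumed here).
LAYERS, HIDDEN, FFN, EXPERTS, KV_DIM, VOCAB = 32, 4096, 14336, 8, 1024, 32000

# Per layer: Q and O projections are hidden x hidden; K and V are hidden x kv_dim.
attn_params = LAYERS * (2 * HIDDEN * HIDDEN + 2 * HIDDEN * KV_DIM)
# Each expert has three projections (gate, up, down), each hidden x ffn.
moe_params = LAYERS * EXPERTS * 3 * HIDDEN * FFN
# Input embeddings + LM head, kept in bfloat16 (2 bytes per weight).
embed_params = 2 * VOCAB * HIDDEN

bytes_total = attn_params * 0.5 + moe_params * 0.25 + embed_params * 2
print(f"attention params: {attn_params / 1e9:.2f}B at 4-bit")
print(f"MoE params:       {moe_params / 1e9:.2f}B at 2-bit")
print(f"lower bound:      {bytes_total / 1e9:.1f} GB (reported: 18.2 GB)")
```

The gap between this lower bound and the reported 18.2 GB is plausible overhead: 2-bit HQQ uses small quantization groups, so per-group scales and zero points add substantially, and router and normalization weights are excluded above.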
Original data from Hugging Face, OpenCompass, and various public Git repositories.
Release v2024022003