Mixtral 8x7B Instruct V0.1 Hf Attn 4bit MoE 2bit Metaoffload HQQ by mobiuslabsgmbh


4bit, Autotrain compatible, Conversational, Instruct, Mixtral, MoE, Quantized, Region:us

Mixtral 8x7B Instruct V0.1 Hf Attn 4bit MoE 2bit Metaoffload HQQ Benchmarks

nn.n% — How the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o") or GPT-4 ("gpt4").

Mixtral 8x7B Instruct V0.1 Hf Attn 4bit MoE 2bit Metaoffload HQQ Parameters and Internals

Model Type: text-generation
Additional Notes: This model uses a group-size of 128 instead of 256 for the scale/zero parameters, which slightly improves the overall score at a minimal VRAM cost (see the configuration sketch below).
Input/Output:
Accepted Modalities: text
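For context, the 4-bit attention / 2-bit expert split and the scale/zero group-size mentioned in the note above correspond to HQQ quantization settings. The following is a minimal, hypothetical sketch of how such a configuration could be expressed with the hqq library's BaseQuantizeConfig; the weight group sizes (64 for attention, 16 for experts), the keyword names (quant_zero, quant_scale, offload_meta), and the nested scale_quant_params/zero_quant_params keys are assumptions based on earlier hqq releases, not values published on this page.

```python
# Hypothetical HQQ configuration sketch -- not the maintainer's official recipe.
# Assumed: hqq's BaseQuantizeConfig accepts quant_zero/quant_scale/offload_meta
# and returns a dict containing scale_quant_params / zero_quant_params sub-dicts.
from hqq.core.quantize import BaseQuantizeConfig

# Attention projections kept at 4-bit (weight group size of 64 is an assumption).
attn_params = BaseQuantizeConfig(
    nbits=4, group_size=64,
    quant_zero=True, quant_scale=True, offload_meta=True,
)

# MoE expert weights quantized to 2-bit (weight group size of 16 is an assumption).
experts_params = BaseQuantizeConfig(
    nbits=2, group_size=16,
    quant_zero=True, quant_scale=True, offload_meta=True,
)

# The note above: quantize the scale/zero meta-parameters with group-size 128
# instead of 256, trading a little extra VRAM for a slightly better score.
for cfg in (attn_params, experts_params):
    cfg["scale_quant_params"]["group_size"] = 128
    cfg["zero_quant_params"]["group_size"] = 128
```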
LLM Name: Mixtral 8x7B Instruct V0.1 Hf Attn 4bit MoE 2bit Metaoffload HQQ
Repository 🤗: https://huggingface.co/mobiuslabsgmbh/Mixtral-8x7B-Instruct-v0.1-hf-attn-4bit-moe-2bit-metaoffload-HQQ
Required VRAM: 18.3 GB
Updated: 2024-11-22
Maintainer: mobiuslabsgmbh
Model Type: mixtral
Instruction-Based: Yes
Model Files: 18.3 GB
Quantization Type: 4bit
Model Architecture: MixtralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.37.2
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32000
Torch Data Type: float16
Mixtral 8x7B Instruct V0.1 Hf Attn 4bit MoE 2bit Metaoffload HQQ (mobiuslabsgmbh/Mixtral-8x7B-Instruct-v0.1-hf-attn-4bit-moe-2bit-metaoffload-HQQ)
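Given the listing above (roughly 18.3 GB of model files, float16 compute dtype, a 32768-token context, and a LlamaTokenizer), a rough loading-and-generation sketch might look like the following. It assumes the hqq package's Hugging Face engine wrapper (HQQModelForCausalLM.from_quantized) and a transformers version with chat-template support, in the range of the 4.37.2 listed above; treat it as an illustration rather than the maintainer's official usage example.

```python
# Illustrative sketch only; assumes hqq's HF engine wrapper and a CUDA GPU.
import torch
from transformers import AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM  # assumed import path

model_id = "mobiuslabsgmbh/Mixtral-8x7B-Instruct-v0.1-hf-attn-4bit-moe-2bit-metaoffload-HQQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)   # LlamaTokenizer, 32000-token vocab
model = HQQModelForCausalLM.from_quantized(model_id)  # downloads ~18.3 GB of files

# Mixtral-Instruct prompting via the tokenizer's chat template.
messages = [{"role": "user", "content": "Explain mixture-of-experts routing in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to("cuda")  # assumes the quantized weights end up on the GPU

with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```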

Best Alternatives to Mixtral 8x7B Instruct V0.1 Hf Attn 4bit MoE 2bit Metaoffload HQQ

Best Alternatives                  Context / RAM   Downloads  Likes
...M 2 8x22B Beige 3.0bpw H6 EXL2  64K / 53.2 GB   10         0
...M 2 8x22B Beige 2.4bpw H6 EXL2  64K / 42.7 GB   6          0
...M 2 8x22B Beige 4.0bpw H6 EXL2  64K / 70.8 GB   5          0
...M 2 8x22B Beige 5.0bpw H6 EXL2  64K / 88.5 GB   5          0
...B Instruct V0.1 8.0bpw H8 EXL2  64K / 120.2 GB  3          1
...8x22b Instruct Oh EXL2 2.25bpw  64K / 40.1 GB   3          1
...eryTour V2 8x7B 4.5bpw H6 EXL2  32K / 26.5 GB   13         2
...it MoE 2bitgs8 Metaoffload HQQ  32K / 24.1 GB   10         20
... 4bit MoE 3bit Metaoffload HQQ  32K / 22.4 GB   6          13
...hin 2.7 Mixtral 8x7b 8bpw EXL2  32K / 46.8 GB   3          2
Note: a green score (e.g. "73.2") means the model is better than mobiuslabsgmbh/Mixtral-8x7B-Instruct-v0.1-hf-attn-4bit-moe-2bit-metaoffload-HQQ.

Rank the Mixtral 8x7B Instruct V0.1 Hf Attn 4bit MoE 2bit Metaoffload HQQ Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241110