SauerkrautLM Mixtral 8x7B Instruct GPTQ by TheBloke

Tags: 4-bit, Augmentation, Autotrain compatible, Base model:quantized:vagosolut..., Base model:vagosolutions/sauer..., Conversational, Dataset:argilla/distilabel-mat..., De, Dpo, En, Es, Finetuned, Fr, German, Gptq, Instruct, It, Mistral, Mixtral, Moe, Quantized, Region:us, Safetensors

SauerkrautLM Mixtral 8x7B Instruct GPTQ Benchmarks

Scores shown as nn.n% indicate how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").

SauerkrautLM Mixtral 8x7B Instruct GPTQ Parameters and Internals

Model Type 
mixture of experts, mixtral
Use Cases 
Areas:
commercial applications, customized LLMs for business
Limitations:
Inappropriate content may occasionally slip through; consistently appropriate behavior cannot be guaranteed.
Additional Notes 
Training involved augmenting German data to improve grammatical and syntactical correctness.
Supported Languages 
English (fluent), German (fluent), French (fluent), Italian (fluent), Spanish (fluent)
Training Details 
Data Sources:
argilla/distilabel-math-preference-dpo, translated parts of HuggingFaceH4/ultrafeedback_binarized, Sauerkraut-7b-HerO, German SauerkrautLM-DPO dataset
Methodology:
DPO alignment (see the sketch below)
Model Architecture:
Mixture of Experts (MoE)
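
The card lists DPO alignment as the fine-tuning methodology. As a rough illustration of the objective behind that step (not the actual training code used by VAGO solutions; all names here are illustrative), a minimal sketch of the DPO loss on per-sequence log-probabilities could look like this:

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Sketch of the Direct Preference Optimization objective.

    Inputs are per-pair summed log-probabilities of the preferred ("chosen")
    and dispreferred ("rejected") completions under the policy and the frozen
    reference model; beta scales the implicit KL penalty.
    """
    chosen_margin = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_margin = beta * (policy_rejected_logps - ref_rejected_logps)
    # Push the policy to widen the gap between preferred and dispreferred answers.
    return -F.logsigmoid(chosen_margin - rejected_margin).mean()
```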
Safety Evaluation 
Ethical Considerations:
Despite data cleansing efforts, the possibility of uncensored content slipping through cannot be ruled out.
Responsible Ai Considerations 
Accountability:
VAGO solutions
Mitigation Strategies:
Data cleansing to avoid uncensored content
Input Output 
Input Format:
[INST] {prompt} [/INST]
Accepted Modalities:
text
Output Format:
Model's textual output
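
The [INST] {prompt} [/INST] template above is the standard Mistral/Mixtral instruct prompt. A minimal sketch of assembling such a prompt in Python (the helper name and the optional system-prompt handling are illustrative, not taken from the model card):

```python
def build_prompt(user_message: str, system_prompt: str = "") -> str:
    """Wrap a user message in the [INST] ... [/INST] template expected by the model."""
    if system_prompt:
        # One common convention folds the system prompt into the first [INST] block.
        return f"[INST] {system_prompt}\n{user_message} [/INST]"
    return f"[INST] {user_message} [/INST]"


prompt = build_prompt("Erkläre kurz, was ein Mixture-of-Experts-Modell ist.")
```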
LLM Name: SauerkrautLM Mixtral 8x7B Instruct GPTQ
Repository: https://huggingface.co/TheBloke/SauerkrautLM-Mixtral-8x7B-Instruct-GPTQ
Model Name: SauerkrautLM Mixtral 8X7B Instruct
Model Creator: VAGO solutions
Base Model(s): VAGOsolutions/SauerkrautLM-Mixtral-8x7B-Instruct
Model Size: 6.1b
Required VRAM: 23.8 GB
Updated: 2024-11-22
Maintainer: TheBloke
Model Type: mixtral
Instruction-Based: Yes
Model Files: 23.8 GB
Supported Languages: en, de, fr, it, es
GPTQ Quantization: Yes
Quantization Type: gptq
Model Architecture: MixtralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.37.0.dev0
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32000
Torch Data Type: bfloat16
SauerkrautLM Mixtral 8x7B Instruct GPTQ (TheBloke/SauerkrautLM-Mixtral-8x7B-Instruct-GPTQ)
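
Assuming a transformers version at least as new as the 4.37 pre-release listed above, plus the optimum/auto-gptq GPTQ backend and accelerate installed, loading and querying the quantized checkpoint might look like this sketch (the prompt text is only an example; roughly 24 GB of free GPU memory is needed for the 23.8 GB of 4-bit weights):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/SauerkrautLM-Mixtral-8x7B-Instruct-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The GPTQ quantization config ships with the repo; device_map="auto"
# spreads the MoE layers across whatever GPUs are visible.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "[INST] Schreibe einen kurzen Absatz über Sauerkraut. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```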

Best Alternatives to SauerkrautLM Mixtral 8x7B Instruct GPTQ

Best Alternatives | Context / RAM | Downloads / Likes
...ixtral 8x7B Instruct V0.1 GPTQ | 32K / 23.8 GB | 62981135
Dolphin 2.7 Mixtral 8x7b GPTQ | 32K / 23.8 GB | 5393018
Dolphin 2.5 Mixtral 8x7b GPTQ | 32K / 23.8 GB | 214105
....1 LimaRP ZLoss DARE TIES GPTQ | 32K / 23.8 GB | 126
...xtral Instruct 8x7b Zloss GPTQ | 32K / 23.8 GB | 152
...nstruct V0.1 LimaRP ZLoss GPTQ | 32K / 23.8 GB | 153
... Mixtral 8x7b Instruct V3 GPTQ | 32K / 23.8 GB | 252
Dolphin 2.6 Mixtral 8x7b GPTQ | 32K / 23.8 GB | 216
...ixtral 8x7B V0.1 Dolly15K GPTQ | 32K / 23.8 GB | 72
Note: a green score (e.g. "73.2") means that the alternative model scores better than TheBloke/SauerkrautLM-Mixtral-8x7B-Instruct-GPTQ.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241110