SauerkrautLM Mixtral 8x7B Instruct by VAGOsolutions


Tags: Augmentation, Autotrain compatible, Conversational, Dataset:argilla/distilabel-mat..., De, Dpo, En, Endpoints compatible, Es, Finetuned, Fr, German, Instruct, It, Mistral, Mixtral, Moe, Region:us, Safetensors, Sharded, Tensorflow

SauerkrautLM Mixtral 8x7B Instruct Benchmarks

SauerkrautLM Mixtral 8x7B Instruct Parameters and Internals

Model Type 
Mixture of Experts (MoE), text-generation
Use Cases 
Areas:
research, commercial applications
Applications:
language understanding and generation, multilingual processing
Primary Use Cases:
text generation, instruction following
Considerations:
Check outputs for uncensored content and ensure usage complies with applicable laws and ethical standards.
Additional Notes 
The model is aligned with the new German SauerkrautLM-DPO dataset and maintains high linguistic accuracy across multiple languages, especially German.
Supported Languages 
English (fluent), German (fluent), French (fluent), Italian (fluent), Spanish (fluent)
Training Details 
Data Sources:
argilla/distilabel-math-preference-dpo, HuggingFaceH4/ultrafeedback_binarized, SFT SauerkrautLM dataset
Methodology:
DPO for alignment and German data augmentation (a training sketch follows this section)
Model Architecture:
Mixture of Experts (MoE)
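
The DPO stage named above can be approximated with the TRL library. The sketch below is illustrative only: the dataset name comes from the Data Sources list, while the base checkpoint, hyperparameters, and output path are assumptions, not VAGOsolutions' published recipe.

```python
# Hedged sketch of a DPO alignment run over one of the listed
# preference datasets. Base checkpoint, hyperparameters, and paths
# are illustrative assumptions, not the authors' actual recipe.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "mistralai/Mixtral-8x7B-Instruct-v0.1"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

# One of the preference datasets listed under Data Sources.
# DPOTrainer expects "prompt"/"chosen"/"rejected" columns, so the raw
# dataset may need a column-renaming step first.
dataset = load_dataset("argilla/distilabel-math-preference-dpo", split="train")

config = DPOConfig(
    output_dir="sauerkrautlm-dpo",   # hypothetical output path
    beta=0.1,                        # assumed DPO temperature
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
)
# Recent TRL versions take processing_class=; older ones use tokenizer=.
trainer = DPOTrainer(model=model, args=config,
                     train_dataset=dataset, processing_class=tokenizer)
trainer.train()
```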
Input Output 
Input Format:
Utilizes an instruct-based prompt format with initial and follow-up instructions (rendered in the sketch after this section).
Accepted Modalities:
text
Output Format:
Text output following the provided instructions.
Performance Tips:
Ensure that input prompts are clear and consistent to enhance response quality.
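
As a concrete illustration of that prompt format, the snippet below renders a two-turn conversation with the tokenizer's chat template. It assumes the repository ships the standard Mixtral-Instruct template ([INST] ... [/INST]); treat the expected output shape as a sketch, not a guarantee.

```python
# Sketch: rendering the instruct prompt format via the chat template.
# Assumes the repo ships the standard Mixtral-Instruct template.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained(
    "VAGOsolutions/SauerkrautLM-Mixtral-8x7B-Instruct")

messages = [
    {"role": "user", "content": "What is a mixture-of-experts model?"},
    {"role": "assistant",
     "content": "A model that routes each token to a few expert subnetworks."},
    {"role": "user", "content": "Summarize that in one sentence."},  # follow-up
]
prompt = tok.apply_chat_template(messages, tokenize=False,
                                 add_generation_prompt=True)
print(prompt)
# Expected shape (standard Mixtral-Instruct template, assumed):
# <s>[INST] ... [/INST] ... </s>[INST] ... [/INST]
```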
LLM Name: SauerkrautLM Mixtral 8x7B Instruct
Repository 🤗: https://huggingface.co/VAGOsolutions/SauerkrautLM-Mixtral-8x7B-Instruct
Model Size: 46.7b
Required VRAM: 93.6 GB
Updated: 2024-11-22
Maintainer: VAGOsolutions
Model Type: mixtral
Instruction-Based: Yes
Model Files: 4.9 GB: 1-of-19, 5.0 GB: 2-of-19, 5.0 GB: 3-of-19, 4.9 GB: 4-of-19, 5.0 GB: 5-of-19, 5.0 GB: 6-of-19, 4.9 GB: 7-of-19, 5.0 GB: 8-of-19, 5.0 GB: 9-of-19, 4.9 GB: 10-of-19, 5.0 GB: 11-of-19, 5.0 GB: 12-of-19, 5.0 GB: 13-of-19, 4.9 GB: 14-of-19, 5.0 GB: 15-of-19, 5.0 GB: 16-of-19, 4.9 GB: 17-of-19, 5.0 GB: 18-of-19, 4.2 GB: 19-of-19
Supported Languages: en, de, fr, it, es
Model Architecture: MixtralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.36.0.dev0
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32000
Torch Data Type: bfloat16
SauerkrautLM Mixtral 8x7B Instruct (VAGOsolutions/SauerkrautLM-Mixtral-8x7B-Instruct)
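
Given the 93.6 GB bfloat16 footprint spread over 19 safetensors shards, loading the full-precision checkpoint typically needs multiple GPUs. A minimal loading sketch with device_map="auto"; the prompt text is illustrative:

```python
# Minimal loading sketch for the full-precision checkpoint.
# ~93.6 GB of bfloat16 weights in 19 shards; device_map="auto"
# spreads them across available GPUs, spilling to CPU RAM if needed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "VAGOsolutions/SauerkrautLM-Mixtral-8x7B-Instruct"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tok("[INST] Explain expert routing in two sentences. [/INST]",
             return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tok.decode(out[0], skip_special_tokens=True))
```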

Quantized Models of the SauerkrautLM Mixtral 8x7B Instruct

Model | Likes | Downloads | VRAM
...tLM Mixtral 8x7B Instruct GGUF | 9 | 207 | 15 GB
...utLM Mixtral 8x7B Instruct AWQ | 2 | 116 | 24 GB
...tLM Mixtral 8x7B Instruct GPTQ | 3 | 51 | 23 GB
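
For single-GPU or CPU use, the quantized variants above are the practical route. A hedged sketch with llama-cpp-python for a GGUF file; the filename is a placeholder, substitute whichever quantization you actually downloaded:

```python
# Sketch: running a GGUF quantization locally with llama-cpp-python.
# The model_path is a placeholder, not a published filename.
from llama_cpp import Llama

llm = Llama(
    model_path="sauerkrautlm-mixtral-8x7b-instruct.Q4_K_M.gguf",  # placeholder
    n_ctx=32768,      # matches the model's 32K context length
    n_gpu_layers=-1,  # offload all layers to GPU if VRAM allows
)
out = llm("[INST] Summarize what SauerkrautLM is tuned for. [/INST]",
          max_tokens=128)
print(out["choices"][0]["text"])
```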

Best Alternatives to SauerkrautLM Mixtral 8x7B Instruct

Best Alternatives | Context / RAM | Downloads | Likes
Mixtral 8x7B Instruct V0.1 | 32K / 93.6 GB | 487335 | 4206
Dolphin 2.5 Mixtral 8x7b | 32K / 93.6 GB | 47536 | 1210
Merge Mixtral Prometheus 8x7B | 32K / 91.9 GB | 24 | 2
Notux 8x7b V1 | 32K / 93.6 GB | 57 | 165
BagelMIsteryTour V2 8x7B | 32K / 93.5 GB | 92 | 16
Mixtral 8x7B Instruct V0.1 FP8 | 32K / 47.1 GB | 16817 | 3
Karakuri Lm 8x7b Instruct V0.1 | 32K / 93.6 GB | 1110 | 19
XLAM V0.1 R | 32K / 93.6 GB | 178 | 52
Autotrain Xva0j Mixtral8x7b | 32K / 93.6 GB | 69 | 0
TeTO MS 8x7b | 32K / 93.7 GB | 21 | 4
Note: a green score (e.g. "73.2") means the model outperforms VAGOsolutions/SauerkrautLM-Mixtral-8x7B-Instruct.


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241110