Mixtral 8x7B Instruct V0.1 LimaRP ZLoss AWQ by TheBloke


Tags: 4-bit · autotrain-compatible · AWQ · base model: doctor-shotgun/mixt... · base model (quantized): doctor-sh... · conversational · dataset: lemonilia/limarp · en · instruct · mixtral · moe · quantized · region: us · safetensors · sharded · tensorflow

Mixtral 8x7B Instruct V0.1 LimaRP ZLoss AWQ Benchmarks

Benchmark scores (nn.n%) show how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Mixtral 8x7B Instruct V0.1 LimaRP ZLoss AWQ (TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-AWQ)

Mixtral 8x7B Instruct V0.1 LimaRP ZLoss AWQ Parameters and Internals

Model Type: mixtral

Use Cases
Areas: roleplaying
Applications: roleplaying chat
Primary Use Cases: roleplaying chat interactions
Limitations: known biases from niche roleplaying forums
Considerations: not intended for factual information or advice

Additional Notes: The model exhibits biases similar to those found on niche roleplaying forums, and it may repeat itself or respond with unwarranted confidence over extended interactions.

Supported Languages: en (proficient)

Input Output
Input Format: Instruction-Input-Response
Accepted Modalities: text
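
The Instruction-Input-Response format corresponds to an Alpaca-style template. As a minimal sketch (the exact "###" section headers are an assumption based on the stated format, not something this card specifies), such a prompt can be assembled like this:

```python
# Hypothetical Alpaca-style prompt builder for the stated
# Instruction-Input-Response format; the "###" headers are an
# assumption, not confirmed by this model card.
def build_prompt(instruction: str, user_input: str = "") -> str:
    prompt = f"### Instruction:\n{instruction}\n\n"
    if user_input:
        prompt += f"### Input:\n{user_input}\n\n"
    prompt += "### Response:\n"
    return prompt

print(build_prompt(
    "Play the role of a seasoned tavern keeper in a fantasy city.",
    "A hooded stranger asks about rooms for the night.",
))
```
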
LLM Name: Mixtral 8x7B Instruct V0.1 LimaRP ZLoss AWQ
Repository 🤗: https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-AWQ
Model Name: Mixtral 8x7B Instruct v0.1 LimaRP ZLoss
Model Creator: Doctor Shotgun
Base Model(s): Doctor-Shotgun/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss
Model Size: 6.5b
Required VRAM: 24.7 GB
Updated: 2025-02-05
Maintainer: TheBloke
Model Type: mixtral
Instruction-Based: Yes
Model Files: 1-of-3 (10.0 GB), 2-of-3 (10.0 GB), 3-of-3 (4.7 GB)
Supported Languages: en
AWQ Quantization: Yes
Quantization Type: awq
Model Architecture: MixtralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.37.0.dev0
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32000
Torch Data Type: float16
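
As a rough sketch, the AWQ checkpoint can be loaded through Hugging Face Transformers (4.37 or later, with the autoawq package and accelerate installed); the repository ID comes from the card above, while the generation settings are illustrative assumptions:

```python
# Minimal sketch: load the AWQ-quantized checkpoint with Transformers.
# Requires transformers >= 4.37 plus autoawq and accelerate; per the card
# above, expect roughly 24.7 GB of VRAM across the three safetensors shards.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",       # let accelerate place the shards on available GPUs
    low_cpu_mem_usage=True,
)

# Illustrative generation settings, not values taken from this card.
prompt = "### Instruction:\nDescribe your tavern to a new guest.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
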

Best Alternatives to Mixtral 8x7B Instruct V0.1 LimaRP ZLoss AWQ

| Best Alternatives | Context / RAM | Downloads / Likes |
|---|---|---|
| Dolphin 2.7 Mixtral 8x7b AWQ | 32K / 24.7 GB | 636421 |
| Mixtral Instruct AWQ | 32K / 24.7 GB | 437643 |
| Mixtral 8x7B Instruct V0.1 AWQ | 32K / 24.7 GB | 50 |
| ...utLM Mixtral 8x7B Instruct AWQ | 32K / 24.7 GB | 14162 |
| Mixtral 8x7B Instruct V0.1 AWQ | 32K / 24.7 GB | 36457 |
| ...xtral Instruct AWQ Clone Dec23 | 32K / 24.7 GB | 80 |
| ...ixtral Instruct 8x7b Zloss AWQ | 32K / 24.7 GB | 102 |
| ...0.1 LimaRP ZLoss DARE TIES AWQ | 32K / 24.7 GB | 43 |
| Mixtral 8x7B Instruct V0.1 AWQ | 32K / 24.7 GB | 20715 |
| Dolphin 2.6 Mixtral 8x7b AWQ | 32K / 24.7 GB | 2013 |



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227