Limarp Zloss Mixtral 8x7b Qlora by Doctor-Shotgun


Tags: 4-bit · Adapter · Base model (adapter): mistralai/Mixtral-8x7B-v0.1 · Bitsandbytes · Dataset: lemonilia/limarp · en · Finetuned · Generated from trainer · LoRA · Mixtral · MoE · PEFT · Region: us · Safetensors

Limarp Zloss Mixtral 8x7b Qlora Benchmarks

Scores (nn.n%) show how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").

Limarp Zloss Mixtral 8x7b Qlora Parameters and Internals

Use Cases 
Limitations:
The model will exhibit biases similar to those observed in niche roleplaying forums, along with biases inherited from the base model.
Additional Notes 
The model can ramble or impersonate the user when generating very long messages.
Supported Languages 
en (proficiency unknown)
Training Details 
Data Sources:
lemonilia/LimaRP
Methodology:
Experimental LimaRP QLoRA trained at 10k context length (see the loading sketch below).
Context Length:
10000
Model Architecture:
Based on a fork of Hugging Face Transformers with ZLoss and MegaBlocks support.
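The exact training setup is not published on this page; as a reference point, QLoRA fine-tuning typically loads the base model in 4-bit via bitsandbytes. A minimal sketch, assuming the standard NF4 settings (these are assumptions, not the author's confirmed configuration):

```python
# Minimal QLoRA-style setup: load the Mixtral base in 4-bit via bitsandbytes.
# NF4/double-quant settings are assumptions; the actual training config is not published here.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",           # NormalFloat4, the usual QLoRA choice
    bnb_4bit_use_double_quant=True,      # second quantization of the quant constants
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)
```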
Input Output 
Input Format:
Alpaca instruction format from LimaRP v3 for roleplaying scenarios (sketched after this section).
Accepted Modalities:
text
Output Format:
Roleplaying dialogue format, with response length modifiers.
Performance Tips:
Omit the length modifier to let the model adjust response length automatically.
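For illustration, a sketch of a LimaRP-v3-style Alpaca prompt with a length modifier. The persona and scenario text are placeholders, and the exact system wording may differ from the canonical template; consult the dataset card for the authoritative format:

```python
# Illustrative LimaRP v3 Alpaca-style prompt. Persona, scenario, and system
# wording are placeholders, not the canonical template.
prompt = """### Instruction:
Character's Persona: {persona of the character the model plays}

User's Persona: {persona of the user's character}

Scenario: {brief outline of the story}

Play the role of Character. Taking the above information into consideration, you must engage in a roleplaying chat with User below this line. Do not write dialogue and narration for User.

### Input:
User: {user message}

### Response: (length = medium)
Character:"""
```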
LLM Name: Limarp Zloss Mixtral 8x7b Qlora
Repository: https://huggingface.co/Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora
Base Model(s): mistralai/Mixtral-8x7B-v0.1
Required VRAM: 1.9 GB
Updated: 2025-03-12
Maintainer: Doctor-Shotgun
Model Files: 1.9 GB
Supported Languages: en
Model Architecture: Adapter
License: apache-2.0
Is Biased: none
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
PEFT Type: LORA
LoRA Model: Yes
PEFT Target Modules: w2|k_proj|q_proj|o_proj|w1|v_proj|w3|gate
LoRA Alpha: 16
LoRA Dropout: 0.05
R Param: 32
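Continuing from the 4-bit loading sketch above, a minimal example of attaching the published adapter with PEFT and generating from the prompt sketch. Repo IDs come from the table above; the generation settings are illustrative:

```python
# Load the tokenizer and attach the LoRA adapter (r=32, alpha=16,
# dropout=0.05 per the table above) to the 4-bit base model from the
# earlier sketch.
import torch
from peft import PeftModel
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-v0.1")
model = PeftModel.from_pretrained(
    base_model,  # quantized Mixtral base from the QLoRA sketch above
    "Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora",
)
model.eval()

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=300)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```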

Best Alternatives to Limarp Zloss Mixtral 8x7b Qlora

Best Alternatives | Context / RAM | Downloads | Likes
Phi 3 Mini 4K Instruct Sa V0.1 | 0K / 0 GB | 15 | 0
Samantha Omni Humanlike Lora | 0K / 0 GB | 67 | 3
...is Violet Toxic GRPO V0.4 Lora | 0K / 0.5 GB | 12 | 0
Reflection Model | 0K / 0.2 GB | 0 | 1
SpectraMind | 0K / 16.1 GB | 120 | 3
L3 Templar R128 LoRA | 0K / 3.4 GB | 17 | 1
...mall Physics Finetuned Adapter | 0K / 0.1 GB | 9 | 1
SpectraMindQ | 0K / 0.2 GB | 8 | 1
L3.1 Spark R64 LoRA | 0K / 0.4 GB | 6 | 0
Mistral Small Fujin Qlora | 0K / 0.8 GB | 36 | 2
Note: a green score (e.g. "73.2") means the model is better than Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora.

Rank the Limarp Zloss Mixtral 8x7b Qlora Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback will help the ML community identify the most suitable models for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from Hugging Face, OpenCompass, and various public Git repos.
Release v20241227