Sensualize Mixtral GGUF by TheBloke


Base model (quantized): Sao10K/Sensualize-Mixtral-bf16   Dataset: NobodyExistsOnTheInternet/full120k   Tags: GGUF, Mixtral, Quantized, Region: US


Sensualize Mixtral GGUF Parameters and Internals

Model Type 
mixtral
Use Cases 
Primary Use Cases:
Roleplay, specifically ERP (erotic roleplay)
Limitations:
Finicky with settings
Additional Notes 
Trained on roughly 80M tokens for 1 epoch with ZLoss, using a Megablocks-based fork of transformers. The model is experimental and finicky; the Universal-Light or Universal-Creative presets in SillyTavern are recommended for better results.
Training Details 
Data Sources:
NobodyExistsOnTheInternet/full120k, plus the creator's own NSFW instruct and de-alignment data
Data Volume:
80M tokens
Methodology:
Trained with ZLoss on a Megablocks-based fork of transformers, using the Alpaca prompt format
Training Time:
12 hours on 2xA100s
Hardware Used:
2xA100s
Input Output 
Input Format:
### Instruction:
{system_message}

### Input:
{prompt}

### Response:
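As a small illustration (not from the original card), the Alpaca-style template above can be assembled in Python; the helper name and the default system message below are assumptions, not part of the model card:

```python
def build_prompt(user_prompt: str,
                 system_message: str = "You are a creative roleplay assistant.") -> str:
    """Assemble the Alpaca-style template this model expects."""
    return (
        "### Instruction:\n"
        f"{system_message}\n\n"
        "### Input:\n"
        f"{user_prompt}\n\n"
        "### Response:\n"
    )

# Example: print the fully formatted prompt.
print(build_prompt("Describe a quiet seaside town at dusk."))
```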
LLM Name: Sensualize Mixtral GGUF
Repository: https://huggingface.co/TheBloke/Sensualize-Mixtral-GGUF
Model Name: Sensualize Mixtral
Model Creator: Saofiq
Base Model(s): Sensualize Mixtral Bf16 (Sao10K/Sensualize-Mixtral-bf16)
Required VRAM: 15.6 GB
Updated: 2025-02-05
Maintainer: TheBloke
Model Type: mixtral
Model Files: 15.6 GB, 20.4 GB, 26.4 GB, 26.4 GB, 32.2 GB, 32.2 GB, 38.4 GB, 49.6 GB
GGUF Quantization: Yes
Quantization Type: gguf
Model Architecture: AutoModel
License: cc-by-nc-4.0
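One way to run one of these quantized files locally is through llama-cpp-python. The sketch below is a hedged example rather than an official recipe: the quant filename, context size, and sampling settings are assumptions, so check the repository's file list for the exact filename before downloading.

```python
# pip install llama-cpp-python huggingface_hub
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant from the repo; the filename below is assumed, not confirmed.
model_path = hf_hub_download(
    repo_id="TheBloke/Sensualize-Mixtral-GGUF",
    filename="sensualize-mixtral.Q2_K.gguf",  # smallest (~15.6 GB) quant, name assumed
)

llm = Llama(
    model_path=model_path,
    n_ctx=4096,       # context window to allocate
    n_gpu_layers=-1,  # offload all layers to GPU if VRAM allows
)

# Alpaca-style prompt, matching the Input Format section above.
prompt = (
    "### Instruction:\nYou are a creative roleplay assistant.\n\n"
    "### Input:\nDescribe a quiet seaside town at dusk.\n\n"
    "### Response:\n"
)

out = llm(prompt, max_tokens=256, temperature=0.8, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```

As a rule of thumb, the larger files in the Model Files list are higher-precision quants that need proportionally more RAM or VRAM, while the 15.6 GB file matches the Required VRAM figure above.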

Best Alternatives to Sensualize Mixtral GGUF

Best Alternatives | Context / RAM | Downloads | Likes
ComicBot V.2 Gguf | 32K / 5 GB | 23 | 0
Phi 2 GGUF | 0K / 1.2 GB | 2,200,309 | 195
Marco O1 GGUF | 0K / 3 GB | 501 | 12
Mixtral 8x7B Instruct V0.1 GGUF | 0K / 15.6 GB | 37,350 | 611
Mixtral 8x7B V0.1 GGUF | 0K / 15.6 GB | 4,950 | 423
Dolphin 2.7 Mixtral 8x7b GGUF | 0K / 15.6 GB | 8,747 | 138
Dolphin 2.5 Mixtral 8x7b GGUF | 0K / 15.6 GB | 6,026 | 300
GOAT Llama3.1 V0.1 | 0K / 0.2 GB | 10 | 3
Phi 3 Mini 4K Instruct GGUF | 0K / 1.4 GB | 934 | 16
Futfut By Zephyr7b Gguf | 0K / 5.1 GB | 191 | 2


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227