QwQ 32B Preview Abliterated by huihui-ai

Tags: Abliterated, Autotrain compatible, Base model: finetune: qwen/qwq-3..., Base model: qwen/qwq-32b-previe..., Chat, Conversational, En, Endpoints compatible, Qwen2, Region: us, Safetensors, Sharded, Tensorflow, Uncensored

QwQ 32B Preview Abliterated Benchmarks

nn.n% — How the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o") or GPT-4 ("gpt4").
QwQ 32B Preview Abliterated (huihui-ai/QwQ-32B-Preview-abliterated)

QwQ 32B Preview Abliterated Parameters and Internals

LLM Name: QwQ 32B Preview Abliterated
Repository 🤗: https://huggingface.co/huihui-ai/QwQ-32B-Preview-abliterated
Base Model(s): QwQ 32B Preview (Qwen/QwQ-32B-Preview)
Model Size: 32b
Required VRAM: 65.8 GB
Updated: 2024-12-22
Maintainer: huihui-ai
Model Type: qwen2
Model Files: 4.9 GB each for shards 1-of-14 through 13-of-14, 2.1 GB for shard 14-of-14
Supported Languages: en
Model Architecture: Qwen2ForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.46.2
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 152064
Torch Data Type: bfloat16
Errors: replace
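
The parameters above map onto a standard Transformers loading call. Below is a minimal sketch, assuming transformers 4.46.2 or newer and enough GPU/CPU memory for the ~65.8 GB of bfloat16 weights; device_map="auto", the example prompt, and the generation settings are illustrative choices, not values taken from this listing.

```python
# Minimal loading sketch for huihui-ai/QwQ-32B-Preview-abliterated.
# Assumes transformers >= 4.46.2 and enough memory for ~65.8 GB of
# bfloat16 weights; device_map="auto" is an illustrative choice.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "huihui-ai/QwQ-32B-Preview-abliterated"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # Qwen2Tokenizer
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the listed Torch Data Type
    device_map="auto",           # spreads the 14 safetensors shards across devices
)

messages = [{"role": "user", "content": "Explain the difference between a list and a tuple in Python."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```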

Quantized Models of the QwQ 32B Preview Abliterated

Model | Likes | Downloads | VRAM
...iew Abliterated 4.5bpw H8 EXL2 | 3 | 126 | 19 GB
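
The VRAM figures in this table and the one above follow from simple arithmetic: weight memory is roughly parameter count times bits per weight divided by 8. A back-of-the-envelope sketch; the parameter count below is inferred from the listed 65.8 GB bfloat16 footprint rather than stated anywhere on this page.

```python
# Rough weight-memory estimate: parameters * bits_per_weight / 8 bytes.
# The parameter count is inferred from the listed 65.8 GB bfloat16
# footprint (2 bytes per weight); it is an approximation.
def weight_gb(num_params: float, bits_per_weight: float) -> float:
    return num_params * bits_per_weight / 8 / 1e9

params = 65.8e9 / 2  # ~32.9B parameters implied by the bf16 size

print(f"bf16 (16 bpw): {weight_gb(params, 16.0):.1f} GB")  # ~65.8 GB, matching the listing
print(f"EXL2 4.5 bpw:  {weight_gb(params, 4.5):.1f} GB")   # ~18.5 GB, close to the listed 19 GB
```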

Best Alternatives to QwQ 32B Preview Abliterated

Best Alternatives | Context / RAM | Downloads | Likes
QwQ 32B Coder Fusion 9010 | 32K / 65.8 GB | 313 | 3
...Coder 32B Instruct Abliterated | 32K / 65.8 GB | 877 | 18
QwQ 32B Preview Jackterated | 32K / 65.8 GB | 381 | 4
...en2.5 32B Instruct Abliterated | 32K / 65.8 GB | 3083 | 10
QwQ 32B Coder Fusion 8020 | 32K / 65.8 GB | 196 | 2
QwQ 32B Coder Fusion 7030 | 32K / 65.8 GB | 88 | 4
...y Qwen2.5coder 32B V24.1q 200K | 195K / 65.8 GB | 31 | 1
Qwen2.5 32B | 128K / 65.5 GB | 21205 | 57
...wen2.5 32B Inst BaseMerge TIES | 128K / 65.8 GB | 71 | 2
...wen2.5 32B Inst BaseMerge TIES | 128K / 65.8 GB | 47 | 1
Note: green Score (e.g. "73.2") means that the model is better than huihui-ai/QwQ-32B-Preview-abliterated.

Rank the QwQ 32B Preview Abliterated Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

What open-source LLMs or SLMs are you in search of? 40066 in total.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217