Huihui Ai Abliterated Qwen2.5 32B Inst BaseMerge TIES by CombinHorizon


Tags: Merged Model, Arxiv:2306.01708, Arxiv:2407.10671, Autotrain compatible, Base model: huihui-ai/qwen2.5-3..., Base model: qwen/qwen2.5-32b, Conversational, En, Endpoints compatible, Instruct, Model-index, Qwen2, Qwen2.5, Region:us, Safetensors, Sharded, Tensorflow, Ties
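
The tags point to the TIES-Merging paper (arXiv:2306.01708) and the two Qwen2.5-32B checkpoints that were combined. As a rough illustration only, not CombinHorizon's exact recipe, here is a minimal NumPy sketch of the TIES trim / elect-sign / disjoint-merge steps; the `density` and `lam` values are illustrative assumptions.

```python
# Minimal sketch of TIES merging (arXiv:2306.01708): combine fine-tuned
# checkpoints that share a base (conceptually, Qwen/Qwen2.5-32B as the base
# and huihui-ai/Qwen2.5-32B-Instruct-abliterated as a fine-tune).
# Density and scaling values below are illustrative, not the actual recipe.
import numpy as np

def trim(tau: np.ndarray, density: float) -> np.ndarray:
    """Keep only the top `density` fraction of entries by magnitude."""
    k = max(1, int(round(density * tau.size)))
    threshold = np.sort(np.abs(tau).ravel())[-k]
    return np.where(np.abs(tau) >= threshold, tau, 0.0)

def ties_merge(base: np.ndarray, finetuned: list[np.ndarray],
               density: float = 0.5, lam: float = 1.0) -> np.ndarray:
    # 1) Task vectors: difference between each fine-tune and the base,
    # 2) trimmed to the largest-magnitude entries.
    taus = np.stack([trim(ft - base, density) for ft in finetuned])
    # 3) Elect one sign per parameter from the summed trimmed task vectors.
    elected = np.sign(taus.sum(axis=0))
    # 4) Disjoint merge: average only the entries whose sign agrees.
    agree = (np.sign(taus) == elected) & (taus != 0)
    counts = np.maximum(agree.sum(axis=0), 1)
    merged_tau = (taus * agree).sum(axis=0) / counts
    # 5) Add the merged task vector back onto the base weights.
    return base + lam * merged_tau
```

In practice a merge like this is run per-tensor over the sharded safetensors weights with a merge toolkit (e.g. mergekit's "ties" method) rather than over flattened arrays, but the steps are the same.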

Huihui Ai Abliterated Qwen2.5 32B Inst BaseMerge TIES Benchmarks

Benchmark scores (shown as percentages) compare the model to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
Huihui Ai Abliterated Qwen2.5 32B Inst BaseMerge TIES (CombinHorizon/huihui-ai-abliterated-Qwen2.5-32B-Inst-BaseMerge-TIES)

Huihui Ai Abliterated Qwen2.5 32B Inst BaseMerge TIES Parameters and Internals

LLM Name: Huihui Ai Abliterated Qwen2.5 32B Inst BaseMerge TIES
Repository: https://huggingface.co/CombinHorizon/huihui-ai-abliterated-Qwen2.5-32B-Inst-BaseMerge-TIES
Base Model(s): Qwen/Qwen2.5-32B, huihui-ai/Qwen2.5-32B-Instruct-abliterated
Merged Model: Yes
Model Size: 32b
Required VRAM: 65.8 GB
Updated: 2025-02-05
Maintainer: CombinHorizon
Model Type: qwen2
Instruction-Based: Yes
Model Files: 5.0 GB (1-of-14), 5.0 GB (2-of-14), 4.9 GB (3-of-14), 4.9 GB (4-of-14), 4.9 GB (5-of-14), 4.9 GB (6-of-14), 4.9 GB (7-of-14), 4.9 GB (8-of-14), 4.9 GB (9-of-14), 4.9 GB (10-of-14), 4.9 GB (11-of-14), 4.9 GB (12-of-14), 4.9 GB (13-of-14), 1.9 GB (14-of-14)
Supported Languages: en
Model Architecture: Qwen2ForCausalLM
License: apache-2.0
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.46.2
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 152064
Torch Data Type: bfloat16
Errors: replace
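
Given the Qwen2ForCausalLM architecture, bfloat16 weights, and the repository listed above, a minimal loading sketch with the standard Hugging Face transformers API might look like the following; the prompt and generation settings are illustrative assumptions.

```python
# Minimal sketch: load the merged checkpoint with transformers.
# device_map="auto" (via accelerate) spreads the 14 safetensors shards
# (~65.8 GB in bfloat16) across available GPUs or offloads to CPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "CombinHorizon/huihui-ai-abliterated-Qwen2.5-32B-Inst-BaseMerge-TIES"

tokenizer = AutoTokenizer.from_pretrained(repo)  # Qwen2Tokenizer per the card
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,  # matches the card's Torch Data Type
    device_map="auto",
)

# Illustrative chat-style prompt using the tokenizer's chat template.
messages = [{"role": "user", "content": "Summarize TIES merging in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Note that the 131072-token context length is supported by the architecture, but long prompts also require substantial KV-cache memory beyond the weights themselves.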

Best Alternatives to Huihui Ai Abliterated Qwen2.5 32B Inst BaseMerge TIES

Best Alternatives | Context / RAM | Downloads / Likes
...y Qwen2.5coder 32B V24.1q 200K | 195K / 65.8 GB | 122
...wen2.5 32B Inst BaseMerge TIES | 128K / 65.8 GB | 3619
Franqwenstein 35B | 128K / 69.8 GB | 2578
EVA Qwen2.5 32B V0.2 | 128K / 65.8 GB | 345048
...1 Qwen2.5 Instruct 32B Preview | 128K / 65.8 GB | 1747
QwQenSeek Coder | 128K / 65.7 GB | 584
EVA Qwen2.5 32B V0.0 | 128K / 65.8 GB | 104826
Qwenstein2.5 32B Instruct | 128K / 65.5 GB | 942
EVA Qwen2.5 32B V0.1 | 128K / 65.8 GB | 99514
Q2.5 32B Slush | 128K / 65.7 GB | 1727


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227