Qwen2.5 32B AGI by AiCloser


Tags: Autotrain compatible · Conversational · Endpoints compatible · Instruct · Qwen2 · Safetensors · Sharded · Tensorflow · Region: us · Languages: en, zh · Base model (finetune): Qwen/Qwen2.5-32B-Instruct · Datasets: anthracite-org/kalo-op..., orion-zhen/dpo-toxic-z..., unalignment/toxic-dpo-...

Qwen2.5 32B AGI Benchmarks

Scores are reported as nn.n% and show how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
Qwen2.5 32B AGI (AiCloser/Qwen2.5-32B-AGI)

Qwen2.5 32B AGI Parameters and Internals

Additional Notes 
Here, AGI stands for Aspirational Grand Illusion. The model is fine-tuned to address "hypercensuritis", that is, to reduce the excessive refusal and censorship behavior of the base instruct model.
Supported Languages 
zh (Chinese), en (English)
LLM Name: Qwen2.5 32B AGI
Repository: https://huggingface.co/AiCloser/Qwen2.5-32B-AGI
Base Model(s): Qwen/Qwen2.5-32B-Instruct
Model Size: 32b
Required VRAM: 44.6 GB
Updated: 2024-12-21
Maintainer: AiCloser
Model Type: qwen2
Instruction-Based: Yes
Model Files: 66 safetensors shards (1-of-66: 1.6 GB; 2-of-66 through 44-of-66: 1.0 GB each; …)
Supported Languages: zh, en
Model Architecture: Qwen2ForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.44.2
Vocabulary Size: 152064
Torch Data Type: bfloat16
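
For orientation, below is a minimal loading sketch using the Hugging Face transformers API, assuming transformers >= 4.44.2 as listed above. The repository id, dtype, and architecture come from the table; the prompt and generation settings are purely illustrative.

```python
# Minimal sketch: load AiCloser/Qwen2.5-32B-AGI with transformers.
# Assumes enough GPU memory for the ~44.6 GB of bfloat16 weights;
# the prompt and generation parameters below are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AiCloser/Qwen2.5-32B-AGI"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # matches the listed Torch Data Type
    device_map="auto",            # spreads the sharded weights across available GPUs
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "用中文简单介绍一下你自己。"},  # zh and en are both supported
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```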

Best Alternatives to Qwen2.5 32B AGI

Best Alternatives | Context / RAM | Downloads | Likes
...y Qwen2.5coder 32B V24.1q 200K | 195K / 65.8 GB | 30 | 1
...wen2.5 32B Inst BaseMerge TIES | 128K / 65.8 GB | 45 | 1
Franqwenstein 35B | 128K / 69.8 GB | 107 | 7
EVA Qwen2.5 32B V0.2 | 128K / 65.8 GB | 2740 | 43
EVA Qwen2.5 32B V0.0 | 128K / 65.8 GB | 1504 | 23
EVA Qwen2.5 32B V0.1 | 128K / 65.8 GB | 1441 | 14
...wen2.5 32B Inst BaseMerge TIES | 128K / 65.8 GB | 65 | 2
Rombos Coder V2.5 Qwen 32B | 128K / 65.8 GB | 245 | 6
EVA Instruct 32B V2 | 128K / 65.8 GB | 188 | 3
EVA Instruct QwQ 32B Preview | 128K / 65.8 GB | 74 | 3
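
The download and like counts above change over time; the sketch below shows one way to refresh them through the Hugging Face Hub API. Only the subject model's repo id is shown, since the alternatives' full repo ids are truncated in the table; substitute the ids you want to compare.

```python
# Sketch: pull current download/like counts from the Hugging Face Hub.
# Add the full repo ids of any alternatives you want to compare
# (their names are truncated in the table above).
from huggingface_hub import HfApi

api = HfApi()
for repo_id in ["AiCloser/Qwen2.5-32B-AGI"]:
    info = api.model_info(repo_id)
    print(f"{repo_id}: {info.downloads} downloads, {info.likes} likes")
```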



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217