Qwen2.5 32B Instruct CFT by TIGER-Lab


Tags: Arxiv:2501.17703, Autotrain compatible, Base model (finetune): Qwen/Qwen2.5-32B-Instruct, CFT, Conversational, Dataset: tiger-lab/webinstruct-..., En, Endpoints compatible, Instruct, Math, Qwen2, Reasoning, Region: US, Safetensors, Sharded, Tensorflow

Qwen2.5 32B Instruct CFT Benchmarks

Scores shown as nn.n% indicate how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
Qwen2.5 32B Instruct CFT (TIGER-Lab/Qwen2.5-32B-Instruct-CFT)

Qwen2.5 32B Instruct CFT Parameters and Internals

LLM Name: Qwen2.5 32B Instruct CFT
Repository: 🤗 https://huggingface.co/TIGER-Lab/Qwen2.5-32B-Instruct-CFT
Base Model(s): Qwen/Qwen2.5-32B-Instruct
Model Size: 32B
Required VRAM: 65.8 GB
Updated: 2025-02-09
Maintainer: TIGER-Lab
Model Type: qwen2
Instruction-Based: Yes
Model Files: 14 safetensors shards; shards 1-of-14 through 13-of-14 are 4.9 GB each, shard 14-of-14 is 2.1 GB (65.8 GB total)
Supported Languages: en
Model Architecture: Qwen2ForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.46.1
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 152064
Torch Data Type: bfloat16
Errors: replace
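
Given the details above (Qwen2ForCausalLM architecture, bfloat16 weights, 32768-token context, Qwen2 chat tokenizer), the model loads through the standard Hugging Face transformers API. Below is a minimal sketch, assuming transformers >= 4.46.1 as listed and hardware that fits the ~65.8 GB of bfloat16 weights; the prompt is purely illustrative:

```python
# Minimal loading and generation sketch for TIGER-Lab/Qwen2.5-32B-Instruct-CFT.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TIGER-Lab/Qwen2.5-32B-Instruct-CFT"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the Torch Data Type listed above
    device_map="auto",           # spread the 14 shards across available GPUs
)

# The model is instruction-tuned, so prompts go through the Qwen2 chat template.
messages = [{"role": "user", "content": "Solve: what is 12 * 17?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```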

Best Alternatives to Qwen2.5 32B Instruct CFT

Best Alternatives | Context / RAM | Downloads | Likes
...y Qwen2.5coder 32B V24.1q 200K | 195K / 65.8 GB | 6 | 2
...wen2.5 32B Inst BaseMerge TIES | 128K / 65.8 GB | 326 | 9
...wen2.5 32B Inst BaseMerge TIES | 128K / 65.8 GB | 13 | 1
Franqwenstein 35B | 128K / 69.8 GB | 208 | 8
EVA Qwen2.5 32B V0.2 | 128K / 65.8 GB | 3298 | 48
...1 Qwen2.5 Instruct 32B Preview | 128K / 65.8 GB | 202 | 7
QwQenSeek Coder | 128K / 65.7 GB | 194 | 4
Qwenstein2.5 32B Instruct | 128K / 65.5 GB | 121 | 2
EVA Qwen2.5 32B V0.0 | 128K / 65.8 GB | 1091 | 26
EVA Qwen2.5 32B V0.1 | 128K / 65.8 GB | 1046 | 14
Note: a green score (e.g. "73.2") means that the alternative model outperforms TIGER-Lab/Qwen2.5-32B-Instruct-CFT.
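
The RAM column above closely tracks the bfloat16 checkpoint size, which you can sanity-check from the parameter count alone: bfloat16 stores two bytes per parameter, while a 4-bit quant needs roughly half a byte. A back-of-envelope sketch (the bytes-per-parameter figures are standard rules of thumb, and the implied parameter count is inferred from the 65.8 GB shard total, not an official number):

```python
# Rough weight-memory math for the models in the table above.
# These are weight sizes only; KV cache and runtime overhead come on top.

def weight_size_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate on-disk / in-VRAM weight size in GB."""
    return params_billion * bytes_per_param

# The 65.8 GB shard total implies ~32.9B parameters at 2 bytes each.
implied_params = 65.8 / 2
print(f"implied parameter count: ~{implied_params:.1f}B")

print(f"bf16 weights:  ~{weight_size_gb(implied_params, 2.0):.1f} GB")
print(f"4-bit weights: ~{weight_size_gb(implied_params, 0.5):.1f} GB")
```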

Rank the Qwen2.5 32B Instruct CFT Capabilities

🆘 Have you tried this model? Rate its performance. This feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227