Qwen2.5 Coder 7B by Qwen


Tags: arxiv:2309.00071, arxiv:2407.10671, arxiv:2409.12186, autotrain-compatible, base_model:finetune:Qwen/Qwen2.5-7B, base_model:Qwen/Qwen2.5-7B, code, codegen, codeqwen, conversational, en, endpoints-compatible, qwen, qwen-coder, qwen2, region:us, safetensors, sharded, tensorflow
Model Card on HF 🤗: https://huggingface.co/Qwen/Qwen2.5-Coder-7B
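
For quick experimentation, here is a minimal sketch of loading the checkpoint with Hugging Face `transformers` and completing a code prompt. The prompt and generation settings are illustrative assumptions, not part of the official card.

```python
# Minimal sketch: load Qwen2.5-Coder-7B and complete a code prompt.
# Assumes `transformers` and `torch` are installed and a GPU with roughly
# 16 GB of VRAM is available (the bfloat16 weights alone are ~15.2 GB).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-Coder-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the published torch dtype
    device_map="auto",
)

# This is the base (non-instruct) model: give it code to continue,
# not a chat turn.
prompt = "def quicksort(arr):\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```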

Qwen2.5 Coder 7B Benchmarks

Benchmark scores are reported as percentages relative to reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4"). (No scores for Qwen2.5 Coder 7B, Qwen/Qwen2.5-Coder-7B, were captured in this snapshot.)

Qwen2.5 Coder 7B Parameters and Internals

Model Type: text-generation, code

Use Cases
  Areas: research, commercial applications
  Applications: code generation, code reasoning, code fixing, code agents
  Primary Use Cases: coding capabilities, mathematics, general competencies
  Limitations: not recommended for conversational use out of the box; this is a base model, so apply post-training (e.g. SFT) or use the Instruct variant for chat
  Considerations: post-training or task-specific fine-tuning is recommended for certain applications

Additional Notes: The architecture includes RoPE, SwiGLU, RMSNorm, and attention QKV bias.

Supported Languages: en (English)

Training Details
  Data Sources: source code, text-code grounding data, synthetic data
  Data Volume: 5.5 trillion tokens
  Methodology: large-scale pretraining of a transformer decoder with RoPE, SwiGLU, RMSNorm, and attention QKV bias
  Context Length: 131,072 tokens (128K, via rope scaling; the native window is 32,768)
  Model Architecture: transformer with RoPE, SwiGLU, RMSNorm, and attention QKV bias (see the config sketch below)
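
The architecture summary above can be cross-checked against the published config without downloading any weights. A small sketch, assuming the standard `transformers` Qwen2 config fields:

```python
from transformers import AutoConfig

# Pull only the config (a few KB), not the 15.2 GB of weights.
config = AutoConfig.from_pretrained("Qwen/Qwen2.5-Coder-7B")

print(config.architectures)            # expected: ['Qwen2ForCausalLM']
print(config.max_position_embeddings)  # native context window (32768 per the table below)
print(config.rope_theta)               # RoPE base frequency
print(config.hidden_act)               # 'silu', the gate activation inside SwiGLU
print(config.vocab_size)               # 152064 per the table below
```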
Input Output
  Input Format: plain text; supports up to 128K tokens of context
  Accepted Modalities: text
  Output Format: text
  Performance Tips: use 'rope_scaling' to handle contexts beyond the native window; see the sketch below.
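
Qwen's model cards for the 2.5 series describe extending the native 32K window with YaRN-style rope scaling. The sketch below sets it on the config before loading; the factor of 4.0 (4.0 × 32768 = 131072 tokens) follows that guidance, but treat the exact keys and factor as assumptions to verify against the current card.

```python
import torch
from transformers import AutoConfig, AutoModelForCausalLM

model_id = "Qwen/Qwen2.5-Coder-7B"

# Override rope scaling on the config before the weights are loaded.
config = AutoConfig.from_pretrained(model_id)
config.rope_scaling = {
    "type": "yarn",
    "factor": 4.0,  # 4.0 * 32768 = 131072 tokens
    "original_max_position_embeddings": 32768,
}

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    config=config,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```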
LLM Name: Qwen2.5 Coder 7B
Repository 🤗: https://huggingface.co/Qwen/Qwen2.5-Coder-7B
Base Model(s): Qwen/Qwen2.5-7B
Model Size: 7B
Required VRAM: 15.2 GB
Updated: 2025-02-05
Maintainer: Qwen
Model Type: qwen2
Model Files: 1-of-4 (4.9 GB), 2-of-4 (4.9 GB), 3-of-4 (4.3 GB), 4-of-4 (1.1 GB)
Supported Languages: en
Generates Code: Yes
Model Architecture: Qwen2ForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.45.0.dev0
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 152064
Torch Data Type: bfloat16
Errors: replace
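
The tokenizer rows in the table above are easy to verify at runtime. A small sketch, assuming the stock `transformers` tokenizer classes:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-Coder-7B")

print(type(tok).__name__)  # Qwen2TokenizerFast, the fast wrapper of Qwen2Tokenizer
print(tok.pad_token)       # expected: <|endoftext|>
# Note: tok.vocab_size reports the base vocabulary, which can be slightly
# smaller than the 152064 embedding rows listed above (embeddings are padded).
print(tok.vocab_size)
```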

Quantized Models of the Qwen2.5 Coder 7B

Model | Likes | Downloads | VRAM
Qwen2.5 Coder 7B Instruct 4bit | 2 | 103982 | 4 GB
Qwen2.5 Coder 7B Bnb 4bit | 6 | 5977 | 5 GB
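
The table lists prequantized repos; alternatively, the original checkpoint can be quantized on the fly. A sketch using `bitsandbytes` 4-bit NF4 loading, which should land near the ~5 GB footprint shown for the Bnb 4bit entry (exact memory use varies by setup):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization with bf16 compute; requires a CUDA GPU
# and the `bitsandbytes` package.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-Coder-7B",
    quantization_config=bnb_config,
    device_map="auto",
)
```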

Best Alternatives to Qwen2.5 Coder 7B

Best Alternatives | Context / RAM | Downloads | Likes
SakalFusion 7B Beta | 986K / 15.2 GB | 16 | 0
...R1 Distill Qwen MFANN Slerp 7B | 128K / 15.2 GB | 773 | 0
Qwen2.5 7B CySecButler V0.1 | 128K / 15.2 GB | 15 | 3
CoT 2.5 | 128K / 15.2 GB | 39 | 0
Mergekit Ties Uqhfast | 128K / 15.2 GB | 26 | 0
CoT 2.5 | 128K / 15.2 GB | 26 | 0
Mergekit Ties Uqhfast | 128K / 15.2 GB | 13 | 0
StockQwen 2.5 7B | 128K / 15.2 GB | 13 | 3
Qwen2.5 Coder 7B Instruct | 32K / 15.2 GB | 136245 | 401
...erated MFANN Slerp Unretrained | 32K / 15.2 GB | 1449 | 1

Original data from Hugging Face, OpenCompass, and various public Git repositories.
Release v20241227