Qwen2.5 Coder 32B Instruct AWQ by Qwen


Tags: Arxiv:2309.00071, Arxiv:2407.10671, Arxiv:2409.12186, 4-bit, Autotrain compatible, AWQ, Base model:quantized:qwen/qwen..., Base model:qwen/qwen2.5-coder-..., Chat, Code, Codegen, Codeqwen, Conversational, En, Endpoints compatible, Instruct, Quantized, Qwen, Qwen-coder, Qwen2, Region:us, Safetensors, Sharded, Tensorflow

Qwen2.5 Coder 32B Instruct AWQ Benchmarks

Qwen2.5 Coder 32B Instruct AWQ (Qwen/Qwen2.5-Coder-32B-Instruct-AWQ)

Qwen2.5 Coder 32B Instruct AWQ Parameters and Internals

Model Type: Causal Language Model

Use Cases
Areas: research, commercial applications
Applications: code agents, coding capabilities, mathematics, general competencies
Primary Use Cases: code generation, code reasoning, code fixing

Supported Languages: en (high)

Training Details
Data Sources: source code, text-code grounding, synthetic data
Data Volume: 5.5 trillion tokens
Methodology: pretraining and post-training
Context Length: 131,072 tokens
Model Architecture: transformer with RoPE, SwiGLU, RMSNorm, and attention QKV bias

Input / Output
Input Format: message-based prompt template with system and user roles (a usage sketch follows this list)
Accepted Modalities: text
Performance Tips: add a 'rope_scaling' configuration when handling long texts; use YaRN to extend the context window (a configuration sketch follows the details table below)
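Since the repository follows the standard Qwen2 chat format, a minimal usage sketch is shown below. It assumes the transformers and autoawq packages are installed and that a GPU with roughly 20 GB of free VRAM is available; the prompt content is purely illustrative.

```python
# Minimal sketch: load the AWQ checkpoint and prompt it with system/user roles.
# Assumes transformers >= 4.41 plus autoawq, and enough GPU memory for ~19.4 GB of weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-Coder-32B-Instruct-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # AWQ weights are stored in float16
    device_map="auto",
)

# Message-based prompt; the chat template inserts the expected special tokens.
messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a Python function that checks whether a number is prime."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Because the chat template handles the role markers, prompts do not need to be formatted by hand.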
LLM Name: Qwen2.5 Coder 32B Instruct AWQ
Repository: https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct-AWQ
Base Model(s): Qwen/Qwen2.5-Coder-32B-Instruct
Model Size: 32B
Required VRAM: 19.4 GB
Updated: 2024-12-08
Maintainer: Qwen
Model Type: qwen2
Instruction-Based: Yes
Model Files: 3.9 GB (1-of-5), 4.0 GB (2-of-5), 4.0 GB (3-of-5), 4.0 GB (4-of-5), 3.5 GB (5-of-5)
Supported Languages: en
AWQ Quantization: Yes
Quantization Type: awq
Generates Code: Yes
Model Architecture: Qwen2ForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.41.1
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 152064
Torch Data Type: float16
Errors: replace
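The table above lists a default context length of 32,768 tokens, while the training details mention a 131,072-token window. A sketch of the YaRN rope-scaling override is shown below; it assumes the standard transformers AutoConfig API (the same override can instead be written into the checkpoint's config.json), and the factor values follow the upstream Qwen2.5 documentation. Static YaRN scaling can degrade quality on short inputs, so it is best enabled only when long contexts are actually needed.

```python
# Sketch: enable YaRN rope scaling for long inputs.
# Assumes transformers accepts an in-memory config override via from_pretrained(config=...).
from transformers import AutoConfig, AutoModelForCausalLM

model_id = "Qwen/Qwen2.5-Coder-32B-Instruct-AWQ"

config = AutoConfig.from_pretrained(model_id)
# Values from the upstream Qwen2.5 documentation: scale the 32,768-token window
# by a factor of 4 to reach roughly 131,072 tokens.
config.rope_scaling = {
    "type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
}

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    config=config,
    torch_dtype="auto",
    device_map="auto",
)
```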

Best Alternatives to Qwen2.5 Coder 32B Instruct AWQ

Best Alternatives | Context / RAM | Downloads | Likes
...wen2.5 Coder 32B Instruct 3bit | 32K / 14.3 GB | 108 | 3
...wen2.5 Coder 32B Instruct 4bit | 32K / 18.5 GB | 2488 | 6
...truct Gptqmodel 4bit Vortex V1 | 32K / 21.3 GB | 747 | 11
...wen2.5 Coder 32B Instruct 8bit | 32K / 34.9 GB | 911 | 7
Rombos Coder V2.5 Qwen 32B | 128K / 65.8 GB | 308 | 6
Qwen2.5 Coder 32B Instruct | 32K / 65.8 GB | 217679 | 1220
...5 Coder 32B Instruct GPTQ Int8 | 32K / 35.1 GB | 10882 | 12
...5 Coder 32B Instruct GPTQ Int4 | 32K / 19.5 GB | 10370 | 8
...wen2.5 Coder 32B Instruct Bf16 | 32K / 65.9 GB | 654 | 9
Qwen2.5 Coder 32B Instruct | 32K / 65.8 GB | 625 | 5


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124