Qwen2.5 Coder 7B Instruct AWQ by Qwen


Tags: Arxiv:2309.00071, Arxiv:2407.10671, Arxiv:2409.12186, 4-bit, Autotrain compatible, AWQ, Base model (quantized): Qwen/Qwen2.5-Coder-7B-Instruct, Chat, Code, Codegen, CodeQwen, Conversational, En, Endpoints compatible, Instruct, Quantized, Qwen, Qwen-coder, Qwen2, Region: US, Safetensors, Sharded, TensorFlow

Qwen2.5 Coder 7B Instruct AWQ Benchmarks

Benchmark scores (nn.n%) show how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").

Qwen2.5 Coder 7B Instruct AWQ Parameters and Internals

Model Type: text-generation

Use Cases
Areas: code generation
Applications: code reasoning, code fixing, and real-world applications such as code agents
Primary Use Cases: coding enhancements, mathematics, and general competencies
Limitations: a static YaRN configuration can degrade performance on shorter texts
Considerations: apply the `rope_scaling` configuration only for long-context processing

Supported Languages: en (English)

Training Details
Data Sources: source code, text-code grounding, synthetic data
Data Volume: 5.5 trillion tokens
Methodology: pretraining and post-training
Context Length: 131,072 tokens (via YaRN; the shipped config is 32,768)
Model Architecture: Transformer with RoPE, SwiGLU, RMSNorm, and attention QKV bias

Input / Output
Performance Tips: to enable YaRN, add the `rope_scaling` configuration to `config.json`; a hedged sketch follows below.
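As a minimal sketch (not taken from this page), the snippet below shows one way to switch YaRN on by overriding the config at load time instead of editing `config.json` by hand. The `rope_scaling` values mirror the block documented in the Qwen2.5 model cards, but treat them as an assumption and verify against the official `config.json` before relying on them:

```python
from transformers import AutoConfig, AutoModelForCausalLM

MODEL_ID = "Qwen/Qwen2.5-Coder-7B-Instruct-AWQ"

config = AutoConfig.from_pretrained(MODEL_ID)
# Assumed YaRN settings, mirroring the rope_scaling block documented for
# Qwen2.5 models: scale the native 32,768-token window 4x to ~131,072 tokens.
config.rope_scaling = {
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
    "type": "yarn",
}

# Load with the patched config; the AWQ weights need the autoawq package.
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    config=config,
    torch_dtype="auto",
    device_map="auto",
)
```

Because this YaRN setup is static, it applies the scaling to every input regardless of length, which is why the page advises enabling it only for long-context workloads.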
LLM Name: Qwen2.5 Coder 7B Instruct AWQ
Repository: https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct-AWQ
Base Model(s): Qwen/Qwen2.5-Coder-7B-Instruct
Model Size: 7B
Required VRAM: 5.6 GB
Updated: 2024-12-03
Maintainer: Qwen
Model Type: qwen2
Instruction-Based: Yes
Model Files: 4.0 GB (shard 1 of 2), 1.6 GB (shard 2 of 2)
Supported Languages: en
AWQ Quantization: Yes
Quantization Type: awq
Generates Code: Yes
Model Architecture: Qwen2ForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.44.1
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 152064
Torch Data Type: float16
Errors: replace
Qwen2.5 Coder 7B Instruct AWQ (Qwen/Qwen2.5-Coder-7B-Instruct-AWQ)
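For context, here is a minimal, hedged usage sketch built on the standard transformers chat-template API; nothing below comes from this page, and the prompt is invented for illustration. The tokenizer and model classes listed above (Qwen2Tokenizer, Qwen2ForCausalLM) are resolved automatically by the Auto classes:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Qwen/Qwen2.5-Coder-7B-Instruct-AWQ"

# AWQ checkpoints require the autoawq package alongside transformers.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype="auto", device_map="auto"
)

# Hypothetical prompt, formatted with the model's built-in chat template.
messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a Python function that reverses a string."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

With the float16 AWQ shards totaling about 5.6 GB, this should fit on a single consumer GPU, consistent with the Required VRAM figure above.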

Best Alternatives to Qwen2.5 Coder 7B Instruct AWQ

Best Alternatives | Context / RAM | Downloads | Likes
...2.5 Coder 7B Instruct Bnb 4bit | 32K / 5.5 GB | 6839 | 4
FastApply 7B V1.0 | 32K / 15.2 GB | 4271 | 4
...en2.5.1 Coder 7B Instruct 8bit | 32K / 8.1 GB | 166 | 2
...en2.5.1 Coder 7B Instruct 4bit | 32K / 4.3 GB | 118 | 2
StockQwen 2.5 7B | 128K / 15.2 GB | 304 | 2
Qwen2.5 Coder 7B Instruct | 32K / 15.2 GB | 164790 | 343
Qwen2.5 Coder 7B Instruct | 32K / 15.3 GB | 1324 | 4
....5 Coder 7B Instruct GPTQ Int8 | 32K / 8.9 GB | 1572 | 2
Rombos Coder V2.5 Qwen 7B | 32K / 15.2 GB | 265 | 3
Arch Function 7B | 32K / 15.2 GB | 410 | 4
Note: a green score (e.g. "73.2") means that the model is better than Qwen/Qwen2.5-Coder-7B-Instruct-AWQ.

Rank the Qwen2.5 Coder 7B Instruct AWQ Capabilities

Have you tried this model? Rate its performance: your feedback helps the ML community identify the most suitable models for their needs. Your contribution really does make a difference!

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

What open-source LLMs or SLMs are you searching for? 38,770 models are listed in total.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124