CodeQwen1.5 7B AWQ by Qwen


Tags: arXiv:2309.16609 · 4-bit · AWQ · Autotrain compatible · Conversational · En · Endpoints compatible · Pretrained · Quantized · Qwen2 · Region: us · Safetensors · Sharded · Tensorflow
Model Card on HF 🤗: https://huggingface.co/Qwen/CodeQwen1.5-7B-AWQ
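For reference, a minimal sketch for pulling the repository linked above onto disk, assuming `huggingface_hub` is installed; the local directory name is illustrative, not part of the model card:

```python
# Fetch the quantized weights locally with huggingface_hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="Qwen/CodeQwen1.5-7B-AWQ",
    local_dir="./codeqwen1.5-7b-awq",  # hypothetical target path
)
print(f"Model files downloaded to {local_dir}")
```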

CodeQwen1.5 7B AWQ Benchmarks

CodeQwen1.5 7B AWQ (Qwen/CodeQwen1.5-7B-AWQ)

CodeQwen1.5 7B AWQ Parameters and Internals

Model Type 
transformer-based, decoder-only, text-generation
Use Cases 
Areas:
research, code generation
Applications:
code infilling, code generation, bug fixing, text-to-SQL
Limitations:
not advised for chat
Additional Notes 
This is the AWQ-quantized version of the base model, not the chat model.
Supported Languages 
en (plus programming languages)
Training Details 
Data Sources:
code
Data Volume:
3 trillion tokens
Context Length:
64K tokens (65,536)
Model Architecture:
transformer-based decoder-only
Input Output 
Input Format:
code
Accepted Modalities:
text
Output Format:
code
Performance Tips:
Be careful with stopping criteria: as a base (non-chat) model it may generate past the desired completion unless the output is bounded (see the sketch below).
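The stopping-criteria tip is easiest to see in code. Below is a minimal generation sketch, assuming `transformers` (>= 4.39.3, as listed below), `autoawq`, and `accelerate` are installed; the prompt and stop strings are illustrative and not taken from the model card:

```python
# Minimal AWQ generation sketch with an explicit stopping criterion.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    StoppingCriteria,
    StoppingCriteriaList,
)

model_id = "Qwen/CodeQwen1.5-7B-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")


class StopOnStrings(StoppingCriteria):
    """Stop when any of the given strings appears in the generated suffix."""

    def __init__(self, stops, tokenizer, prompt_len):
        self.stops = stops
        self.tokenizer = tokenizer
        self.prompt_len = prompt_len

    def __call__(self, input_ids, scores, **kwargs):
        text = self.tokenizer.decode(input_ids[0][self.prompt_len:])
        return any(s in text for s in self.stops)


prompt = "def quicksort(arr):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
prompt_len = inputs["input_ids"].shape[1]

# The base model is not a chat model, so bound generation explicitly:
# without stop strings and max_new_tokens it will keep producing code.
stopping = StoppingCriteriaList(
    [StopOnStrings(["\nclass ", "\ndef ", "\n\n\n"], tokenizer, prompt_len)]
)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=False,
    stopping_criteria=stopping,
)
print(tokenizer.decode(outputs[0][prompt_len:], skip_special_tokens=True))
```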
LLM Name: CodeQwen1.5 7B AWQ
Repository 🤗: https://huggingface.co/Qwen/CodeQwen1.5-7B-AWQ
Base Model(s): Qwen/CodeQwen1.5-7B
Model Size: 7B
Required VRAM: 5.3 GB
Updated: 2025-02-22
Maintainer: Qwen
Model Type: qwen2
Model Files: 4.0 GB (1-of-2), 1.3 GB (2-of-2)
Supported Languages: en
AWQ Quantization: Yes
Quantization Type: awq
Model Architecture: Qwen2ForCausalLM
License: other
Context Length: 65536
Model Max Length: 65536
Transformers Version: 4.39.3
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <fim_pad>
Vocabulary Size: 92416
Torch Data Type: float16
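Since code infilling is a listed use case and the tokenizer exposes a <fim_pad> padding token, a fill-in-the-middle prompt sketch may be useful. The <fim_prefix>/<fim_suffix>/<fim_middle> token names below are an assumption inferred from <fim_pad>; check `tokenizer.additional_special_tokens` before relying on them:

```python
# Fill-in-the-middle (code infilling) sketch; FIM token names are assumed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/CodeQwen1.5-7B-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prefix = "def read_json(path):\n    with open(path) as f:\n        "
suffix = "\n    return data\n"
fim_prompt = f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

inputs = tokenizer(fim_prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
middle = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(prefix + middle + suffix)
```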

Best Alternatives to CodeQwen1.5 7B AWQ

Best Alternatives | Context / RAM | Downloads | Likes
Dolphin 2.9.2 Qwen2 7B AWQ | 128K / 5.6 GB | 86 | 0
Samantha Qwen2 7B AWQ | 128K / 5.6 GB | 8 | 0
CodeQwen1.5 7B Chat AWQ | 64K / 5.3 GB | 139 | 13
Qwen2.5 7B Instruct AWQ | 32K / 5.6 GB | 16194 | 18
Qwen2.5 Coder 7B Instruct AWQ | 32K / 5.6 GB | 3419 | 11
Qwen2 7B Instruct AWQ | 32K / 5.6 GB | 1160 | 20
Qwen1.5 7B AWQ W4 G128 | 32K / 5.9 GB | 78 | 0
Qwen1.5 7B Chat AWQ | 32K / 5.9 GB | 180 | 12
Qwen2.5 7B Instruct 1M 4bit | 986K / 4.3 GB | 730 | 6
...B Instruct 1M Unsloth Bnb 4bit | 986K / 7.5 GB | 354 | 1


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227