CodeQwen1.5 7B by Qwen


Tags: Arxiv:2309.16609 · Autotrain compatible · En · Endpoints compatible · Pretrained · Qwen2 · Region: us · Safetensors · Sharded · Tensorflow
Model Card on HF 🤗: https://huggingface.co/Qwen/CodeQwen1.5-7B
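The checkpoint loads directly with the Hugging Face transformers library. A minimal sketch, assuming a recent transformers release plus accelerate for device placement; the prompt and decoding settings are illustrative, not taken from this page:

# Minimal loading/generation sketch (settings are illustrative assumptions).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/CodeQwen1.5-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # picks up the bfloat16 weights listed in the table below
    device_map="auto",
)

prompt = "# write a quick sort function\ndef quick_sort(arr):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))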

CodeQwen1.5 7B Benchmarks

CodeQwen1.5 7B (Qwen/CodeQwen1.5-7B)

CodeQwen1.5 7B Parameters and Internals

Model Type 
text-generation, code generation
Use Cases 
Applications:
code infilling, code generation, text-to-SQL, bug fix
Considerations:
Take care with stopping criteria: as a base (non-chat) model it can keep generating past the intended completion (see the infilling sketch below).
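Because code infilling is listed as an application and <fim_pad> appears as the padding token further down, a StarCoder-style fill-in-the-middle prompt is sketched here. The <fim_prefix>/<fim_suffix>/<fim_middle> token names are assumptions and should be checked against tokenizer.special_tokens_map; model and tokenizer are reused from the loading sketch above:

# Hedged FIM sketch; the FIM token names are assumed, verify them on the tokenizer.
prefix = "def fibonacci(n):\n    if n < 2:\n        return n\n    return "
suffix = "\n\nprint(fibonacci(10))"
fim_prompt = f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

inputs = tokenizer(fim_prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(
    **inputs,
    max_new_tokens=32,
    eos_token_id=tokenizer.eos_token_id,  # explicit stop token, per the note on stopping criteria
)
infill = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(infill)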
Additional Notes 
Supports 92 coding languages, with strong code generation capabilities and competitive performance across a range of benchmarks.
Training Details 
Data Volume:
3 trillion tokens
Methodology:
Grouped-query attention (GQA) for efficient inference (see the config sketch below)
Context Length:
65,536 tokens (64K)
Model Architecture:
Transformer-based decoder-only language model
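The GQA layout and the 64K context window can be checked from the published config. A small sketch, assuming the standard Qwen2 config attribute names in transformers:

# Hedged config-inspection sketch; attribute names follow the standard Qwen2 config.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("Qwen/CodeQwen1.5-7B")
print(cfg.max_position_embeddings)  # context window (65536 per the table below)
print(cfg.num_attention_heads)      # query heads
print(cfg.num_key_value_heads)      # shared key/value heads; fewer than query heads under GQA
print(cfg.vocab_size)               # 92416 per the table below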
LLM Name: CodeQwen1.5 7B
Repository 🤗: https://huggingface.co/Qwen/CodeQwen1.5-7B
Model Size: 7B
Required VRAM: 14.6 GB
Updated: 2025-02-22
Maintainer: Qwen
Model Type: qwen2
Model Files: 3.9 GB (1-of-4), 4.0 GB (2-of-4), 4.0 GB (3-of-4), 2.7 GB (4-of-4)
Supported Languages: en
Model Architecture: Qwen2ForCausalLM
License: other
Context Length: 65536
Model Max Length: 65536
Transformers Version: 4.39.3
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <fim_pad>
Vocabulary Size: 92416
Torch Data Type: bfloat16
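The 14.6 GB VRAM figure above is consistent with storing the weights in bfloat16 at 2 bytes per parameter. A quick sanity check, reusing the model from the loading sketch above; the exact parameter count is not stated on this page:

# Hedged sanity check: bfloat16 weights occupy 2 bytes per parameter.
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e9:.2f}B parameters")
print(f"{n_params * 2 / 1024**3:.1f} GiB of weights in bfloat16")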

Quantized Models of CodeQwen1.5 7B

Model | Likes / Downloads | VRAM
CodeQwen1.5 7B AWQ | 2155 | 5 GB
CodeQwen1.5 7B EXL2 8.0bpw | 17 | 7 GB
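The AWQ build can also be loaded through transformers. A sketch, assuming the repository id Qwen/CodeQwen1.5-7B-AWQ (Qwen's usual naming, verify on the Hub) and the autoawq package installed alongside transformers:

# Hedged sketch: the AWQ repo id is assumed from Qwen's naming convention.
from transformers import AutoModelForCausalLM, AutoTokenizer

awq_id = "Qwen/CodeQwen1.5-7B-AWQ"
tokenizer = AutoTokenizer.from_pretrained(awq_id)
model = AutoModelForCausalLM.from_pretrained(awq_id, device_map="auto")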

Best Alternatives to CodeQwen1.5 7B

Best Alternatives | Context / RAM | Downloads / Likes
Qwen2.5 7B Instruct 1M | 986K / 15.4 GB | 289038236
Qwen2.5 7B MixStock V0.1 | 986K / 15.2 GB | 6823
Qwen2.5 7B RRP 1M | 986K / 15.2 GB | 2944
Qwen2.5 7B CelestialHarmony 1M | 986K / 14.8 GB | 1535
Qwen 2.5 7B Exp Sce | 986K / 15.2 GB | 282
COCO 7B Instruct 1M | 986K / 15.2 GB | 1059
SJT 7B V1.1 | 986K / 14.8 GB | 1521
Q2.5 Instruct 1M Harmony | 986K / 15.2 GB | 611
Impish QWEN 7B 1M | 986K / 15.2 GB | 701
Qwen 2.5 7B Deep Stock V5 | 986K / 15.2 GB | 302



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227