Qwen2.5 Coder 1.5B by unsloth


Tags: arXiv:2309.00071, arXiv:2407.10671, arXiv:2409.12186, autotrain-compatible, base model (finetune of Qwen/Qwen2.5-Coder-1.5B), codegen, en, endpoints-compatible, qwen2, region: us, safetensors, unsloth

Qwen2.5 Coder 1.5B Benchmarks

Benchmark scores are shown as nn.n%, indicating how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Qwen2.5 Coder 1.5B (unsloth/Qwen2.5-Coder-1.5B)

Qwen2.5 Coder 1.5B Parameters and Internals

Model Type:
Causal Language Model
Use Cases:
Areas: real-world applications such as Code Agents
Primary Use Cases: code generation, code reasoning, code fixing
Considerations: long-context processing requires extra configuration, and the base model is not recommended for conversational use out of the box.
Additional Notes:
Unsloth notebooks are available for finetuning and deploying Qwen2.5; a finetuning sketch follows below.
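
As a concrete illustration of that note, here is a minimal LoRA finetuning sketch using Unsloth's FastLanguageModel API; the sequence length and LoRA hyperparameters are illustrative choices, not values taken from the official notebooks.

```python
# Minimal LoRA finetuning sketch with Unsloth. Hyperparameters
# (max_seq_length, r, lora_alpha) are illustrative, not from the
# official notebooks.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-Coder-1.5B",
    max_seq_length=2048,   # kept small to save VRAM; native context is 32768
    load_in_4bit=True,     # 4-bit quantization for low-memory finetuning
)

# Attach LoRA adapters to the attention and MLP projections.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
# The resulting PEFT model can then be passed to a trl SFTTrainer
# for supervised finetuning on a code dataset.
```
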
Supported Languages:
en (primary)
Training Details:
Data Sources: source code, text-code grounding data, synthetic data
Data Volume: 5.5 trillion tokens
Context Length: 131,072 tokens (native 32,768, extended via YaRN)
Model Architecture: transformer with RoPE, SwiGLU, RMSNorm, attention QKV bias, and tied word embeddings
Input Output:
Input Format: causal text input
Accepted Modalities: text
Output Format: generated code
Performance Tips: use YaRN for long-context processing, as in the configuration sketch below.
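
The Qwen2.5 documentation enables YaRN through a rope_scaling entry in the model config. Below is a minimal sketch of doing this at load time with transformers, assuming a scaling factor of 4.0 to stretch the native 32,768-token window to roughly 131,072 tokens.

```python
# Sketch: enable YaRN rope scaling at load time to extend the native
# 32768-token window to roughly 131072 tokens (scaling factor 4.0).
import torch
from transformers import AutoConfig, AutoModelForCausalLM

model_id = "unsloth/Qwen2.5-Coder-1.5B"
config = AutoConfig.from_pretrained(model_id)
config.rope_scaling = {
    "type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
}
model = AutoModelForCausalLM.from_pretrained(
    model_id, config=config, torch_dtype=torch.bfloat16, device_map="auto"
)
```

One caveat noted in the Qwen model cards: this static rope_scaling applies to all inputs, including short ones, so enable it only when long inputs are actually expected.
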
LLM Name: Qwen2.5 Coder 1.5B
Repository 🤗: https://huggingface.co/unsloth/Qwen2.5-Coder-1.5B
Base Model(s): Qwen/Qwen2.5-Coder-1.5B
Model Size: 1.5b
Required VRAM: 3.1 GB
Updated: 2024-12-21
Maintainer: unsloth
Model Type: qwen2
Model Files: 3.1 GB
Supported Languages: en
Generates Code: Yes
Model Architecture: Qwen2ForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.44.2
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|PAD_TOKEN|>
Vocabulary Size: 151936
Torch Data Type: bfloat16
Errors: replace
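
The 3.1 GB VRAM figure is consistent with roughly 1.5B parameters at 2 bytes each in bfloat16. Below is a minimal completion sketch against the repository listed above; since this is a base (non-instruct) model, it is prompted with plain text rather than a chat template, and the prompt is purely illustrative.

```python
# Minimal code-completion sketch for unsloth/Qwen2.5-Coder-1.5B.
# A base (non-instruct) model, so it is used for plain completion,
# not chat; the prompt below is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "unsloth/Qwen2.5-Coder-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)  # Qwen2Tokenizer
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # ~3.1 GB of weights: ~1.5B params * 2 bytes
    device_map="auto",
)

prompt = "# Python function that checks whether a number is prime\ndef is_prime(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
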

Best Alternatives to Qwen2.5 Coder 1.5B

Best Alternatives | Context / RAM | Downloads | Likes
Replete Coder Qwen2 1.5B | 128K / 3.1 GB | 233 | 23
Qwen2.5 Coder 1.5B Instruct | 32K / 3.1 GB | 21272 | 51
Qwen2.5 Coder 1.5B | 32K / 3.1 GB | 11056 | 35
Qwenftmodel | 32K / 6.2 GB | 22 | 0
Securin LLM V2.5 Qwen 1.5B | 32K / 3.1 GB | 22 | 0
Qwen2.5 Coder 1.5B Instruct | 32K / 3.1 GB | 1873 | 2
Qwen2 Coder Adamw Iter1 | 32K / 3.1 GB | 371 | 0
Qwen2 Coder Adamw Iter2 | 32K / 3.1 GB | 364 | 0
Qwen2 Coder Adamw Iter5 | 32K / 3.1 GB | 331 | 0
Qwen2 Coder Reflct Adamw Iter2 | 32K / 3.1 GB | 276 | 0


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217