Qwen2.5 1.5B by Qwen


Tags: arXiv:2407.10671 · Autotrain compatible · Conversational · En · Endpoints compatible · Qwen2 · Region: US · Safetensors
Model Card on HF 🤗: https://huggingface.co/Qwen/Qwen2.5-1.5B
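The repository's live metadata (tags, download and like counts) can be fetched programmatically, since the snapshot on this page may lag behind. A minimal sketch, assuming the huggingface_hub package is installed:

```python
# Sketch: fetch live metadata for the repository linked above.
from huggingface_hub import HfApi

info = HfApi().model_info("Qwen/Qwen2.5-1.5B")
print(info.tags)       # repo tags, e.g. "qwen2", "safetensors"
print(info.downloads)  # current download count
print(info.likes)      # current like count
```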

Qwen2.5 1.5B Benchmarks

nn.n% indicates how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Qwen2.5 1.5B (Qwen/Qwen2.5-1.5B)

Qwen2.5 1.5B Parameters and Internals

Model Type 
causal language model
Use Cases 
Limitations:
Not recommended for conversations without post-training
Additional Notes 
The base Qwen2.5 1.5B model requires post-training for certain use cases, such as conversations.
Supported Languages 
en (Proficient), zh (Proficient), fr (Proficient), es (Proficient), pt (Proficient), de (Proficient), it (Proficient), ru (Proficient), ja (Proficient), ko (Proficient), vi (Proficient), th (Proficient), ar (Proficient)
Training Details 
Methodology:
Pretraining
Context Length:
32768
Model Architecture:
Transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
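These architecture details can be checked against the repository's configuration. Below is a minimal sketch, assuming the transformers library is installed; the attribute names follow Qwen2Config, where the "silu" activation corresponds to the SwiGLU MLP blocks:

```python
# Sketch: inspect Qwen2.5-1.5B's config to confirm the details listed above.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Qwen/Qwen2.5-1.5B")
print(config.model_type)               # "qwen2"
print(config.rope_theta)               # RoPE base frequency
print(config.hidden_act)               # "silu" -> SwiGLU in the MLP blocks
print(config.rms_norm_eps)             # RMSNorm epsilon
print(config.tie_word_embeddings)      # tied input/output embeddings
print(config.max_position_embeddings)  # maximum context length in the config
```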
LLM Name: Qwen2.5 1.5B
Repository 🤗: https://huggingface.co/Qwen/Qwen2.5-1.5B
Model Size: 1.5b
Required VRAM: 3.1 GB
Updated: 2025-02-05
Maintainer: Qwen
Model Type: qwen2
Model Files: 3.1 GB
Supported Languages: en
Model Architecture: Qwen2ForCausalLM
License: apache-2.0
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.40.1
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 151936
Torch Data Type: bfloat16
Errors: replace
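The 3.1 GB VRAM figure is consistent with the dtype: roughly 1.5 billion parameters at 2 bytes each in bfloat16 comes to about 3.1 GB of weights, before activations and KV cache. Below is a minimal usage sketch, assuming transformers 4.40.1 or newer and an available accelerator; since this is the base model, it is prompted with plain text completion rather than a chat template:

```python
# Minimal sketch: load the base model in bfloat16 and run plain text
# completion (the base model is not post-trained for chat, so no chat
# template is applied).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # ~1.5B params * 2 bytes/param ≈ 3.1 GB
    device_map="auto",
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```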

Best Alternatives to Qwen2.5 1.5B

Best Alternatives | Context / RAM | Downloads / Likes
ReaderLM V2 | 500K / 3.5 GB | 23287473
Reader Lm 1.5B | 250K / 3.1 GB | 8819584
DeepSeek R1 Distill Qwen 1.5B | 128K / 3.5 GB | 386547659
DeepSeek R1 Distill Qwen 1.5B ONNX | 128K / n/a | 3922337
DeepSeek R1 ReDistill Qwen 1.5B V1.0 | 128K / 3.6 GB | 30742
Stella En 1.5B V5 | 128K / 6.2 GB | 581890211
DeepSeek R1 Distill Qwen 1.5B | 128K / 3.5 GB | 48586
AceInstruct 1.5B | 128K / 3.5 GB | 37410
Bellatrix 1.5B XElite | 128K / 3.5 GB | 2218
QwQ R1 Distill 1.5B CoT | 128K / 3.5 GB | 1789
Note: a green score (e.g. "73.2") means that the model performs better than Qwen/Qwen2.5-1.5B.

Rank the Qwen2.5 1.5B Capabilities

🆘 Have you tried this model? Rate its performance. This feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227