Qwen2.5 7B by Qwen


Tags: arXiv:2407.10671 · Autotrain compatible · Conversational · En · Endpoints compatible · Qwen2 · Region: US · Safetensors · Sharded · TensorFlow
Model Card on HF 🤗: https://huggingface.co/Qwen/Qwen2.5-7B

Qwen2.5 7B Benchmarks

Scores (shown as nn.n%) indicate how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").

Qwen2.5 7B Parameters and Internals

Model Type: causal language model
Additional Notes: We do not recommend using base language models for conversations.
Supported Languages: English, Chinese, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, and Arabic (all proficient)
Training Details:
Methodology: pretraining
Context Length: 131072
Model Architecture: Transformer with RoPE, SwiGLU, RMSNorm, and attention QKV bias (see the sketch below)
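
For reference, here is a minimal PyTorch sketch of two of these components, RMSNorm and a SwiGLU feed-forward block. The class names and dimensions are illustrative only, not Qwen's actual implementation:

```python
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    """Root-mean-square layer norm: rescales by 1/RMS(x), with no mean-centering."""
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(dim))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        inv_rms = x.pow(2).mean(dim=-1, keepdim=True).add(self.eps).rsqrt()
        return self.weight * (x * inv_rms)

class SwiGLU(nn.Module):
    """Gated feed-forward block: SiLU(x @ W_gate) * (x @ W_up), projected back down."""
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.gate = nn.Linear(dim, hidden, bias=False)
        self.up = nn.Linear(dim, hidden, bias=False)
        self.down = nn.Linear(hidden, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.down(nn.functional.silu(self.gate(x)) * self.up(x))
```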
LLM Name: Qwen2.5 7B
Repository 🤗: https://huggingface.co/Qwen/Qwen2.5-7B
Model Size: 7B
Required VRAM: 15.4 GB
Updated: 2025-01-28
Maintainer: Qwen
Model Type: qwen2
Model Files: 4.0 GB (1-of-4), 3.9 GB (2-of-4), 3.9 GB (3-of-4), 3.6 GB (4-of-4)
Supported Languages: en
Model Architecture: Qwen2ForCausalLM
License: apache-2.0
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.40.1
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 152064
Torch Data Type: bfloat16
Errors: replace
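
Given the repository, dtype, and tokenizer class above, a minimal transformers loading sketch follows; the prompt string is just an example. Note the 15.4 GB VRAM figure is consistent with roughly 7.6B parameters at 2 bytes each in bfloat16 (~15.2 GB, plus overhead):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-7B"

# Qwen2Tokenizer is resolved automatically from the repo's tokenizer config.
tokenizer = AutoTokenizer.from_pretrained(model_id)

# bfloat16 weights: ~7.6B params x 2 bytes ≈ 15.2 GB, matching the
# ~15.4 GB VRAM figure above plus a small margin for buffers.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# This is a base model: use plain text completion, not chat-formatted prompts.
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```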

Quantized Models of Qwen2.5 7B

Model | Likes | Downloads | VRAM
Qwen2.5 7B Instruct 4bit | 7 | 276498 | 4 GB
Qwen2.5 7B Bnb 4bit | 2 | 23567 | 5 GB
Qwen2.5 7B Instruct 8bit | 3 | 79 | 8 GB
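
The "Bnb 4bit" entry refers to bitsandbytes quantization. As a sketch, this is one common way to load the base model in 4-bit via transformers + bitsandbytes; the NF4 settings shown are widely used defaults, not necessarily what that particular repo used:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# NF4 4-bit quantization: weights shrink to roughly a quarter of the
# bfloat16 footprint, in line with the 4-5 GB figures in the table above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B")
```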

Best Alternatives to Qwen2.5 7B

Best Alternatives | Context / RAM | Downloads | Likes
Qwen2.5 7B Instruct 1M | 986K / 15.4 GB | 487 | 110
Qwen2.5 7B RRP 1M | 986K / 15.2 GB | 16 | 3
Impish QWEN 7B 1M | 986K / 15.2 GB | 27 | 0
COCO 7B Instruct 1M | 986K / 15.2 GB | 0 | 7
DeepSeek R1 Distill Qwen 7B | 128K / 15.2 GB | 54977 | 219
Qwen2 7B | 128K / 15.4 GB | 35853 | 150
QwQ R1 Distill 7B CoT | 128K / 15.2 GB | 90 | 8
SakalFusion 7B Alpha | 128K / 15.2 GB | 93 | 0
Light 7B Beta | 128K / 15.2 GB | 58 | 1
Qwen 2.5 7B Deep Stock V1 | 128K / 15.2 GB | 22 | 1
Note: a green score (e.g., "73.2") means the model outperforms Qwen/Qwen2.5-7B.

Rank the Qwen2.5 7B Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

What open-source LLMs or SLMs are you in search of? 42,099 models are listed in total.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227