Qwen2.5 14B by Qwen


Tags: arXiv:2407.10671, Conversational, English, Qwen2, Region: US, Safetensors, Sharded, TensorFlow
Model Card on HF 🤗: https://huggingface.co/Qwen/Qwen2.5-14B

Qwen2.5 14B Benchmarks

Benchmark scores (nn.n%) show how Qwen2.5 14B (Qwen/Qwen2.5-14B) compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").

Qwen2.5 14B Parameters and Internals

Model Type: causal language model
Additional Notes: We do not recommend using base language models for conversations.
Supported Languages: English, Chinese, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, and Arabic (all proficient)
Training Details:
  Methodology: pretraining
  Context Length: 131072 tokens
  Model Architecture: Transformer with RoPE, SwiGLU, RMSNorm, and attention QKV bias
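These architectural details are visible in the published model configuration without downloading any weights. A minimal sketch using Hugging Face Transformers (field names follow the Qwen2 config class; the printed comments reflect the values listed on this page):

```python
# Sketch: inspect the published Qwen2.5-14B configuration (no weight download).
# Assumes the transformers library is installed and the Hugging Face Hub is reachable.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Qwen/Qwen2.5-14B")  # returns a Qwen2Config

print(config.max_position_embeddings)  # 131072 -> context length
print(config.hidden_act)               # "silu" -> SwiGLU feed-forward
print(config.rms_norm_eps)             # RMSNorm epsilon
print(config.rope_theta)               # RoPE base frequency
print(config.vocab_size)               # 152064
```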
LLM Name: Qwen2.5 14B
Repository 🤗: https://huggingface.co/Qwen/Qwen2.5-14B
Model Size: 14B
Required VRAM: 29.6 GB
Updated: 2025-02-05
Maintainer: Qwen
Model Type: qwen2
Model Files: 8 sharded safetensors files (1-of-8: 3.9 GB, 2-of-8: 4.0 GB, 3-of-8: 4.0 GB, 4-of-8: 4.0 GB, 5-of-8: 4.0 GB, 6-of-8: 4.0 GB, 7-of-8: 4.0 GB, 8-of-8: 1.7 GB)
Supported Languages: en
Model Architecture: Qwen2ForCausalLM
License: apache-2.0
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.43.1
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 152064
Torch Data Type: bfloat16
Errors: replace
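The fields above (repository, tokenizer class, torch dtype) map directly onto the standard Transformers loading flow. A minimal sketch, assuming a GPU setup with enough memory for the ~29.6 GB of bfloat16 weights; the prompt is illustrative, and because this is the base (pretrained-only) model, plain text completion is used rather than a chat template:

```python
# Sketch: load the base Qwen2.5-14B checkpoint with Hugging Face Transformers
# (transformers >= 4.43.1 per the table above) and run a plain text completion.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-14B"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # Qwen2Tokenizer
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the "Torch Data Type" listed above
    device_map="auto",           # spread the ~29.6 GB of weights across available GPUs
)

# Base model: use raw completion, not a chat template.
prompt = "Qwen2.5 is a family of large language models that"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```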

Quantized Models of Qwen2.5 14B

Model | Likes / Downloads | VRAM
Qwen2.5 14B Instruct 4bit | 7713969 | 8 GB
Qwen2.5 14B Bnb 4bit | 55211 | 9 GB
Qwen2.5 14B Instruct 8bit | 238 | 15 GB
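The bitsandbytes (Bnb) row above corresponds to 4-bit quantization, which can also be applied on the fly when loading the base checkpoint; a sketch assuming a CUDA GPU plus the bitsandbytes and accelerate packages (the pre-quantized repositories listed in the table have their own Hugging Face ids, which are not shown here):

```python
# Sketch: load Qwen2.5-14B with on-the-fly bitsandbytes 4-bit (NF4) quantization,
# bringing the weight footprint down to roughly the ~9 GB shown in the table.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-14B")
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-14B",
    quantization_config=bnb_config,  # quantize weights at load time
    device_map="auto",
)
```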

Best Alternatives to Qwen2.5 14B

Best Alternatives | Context / RAM | Downloads / Likes
Qwen2.5 14B Instruct 1M | 986K / 29.6 GB | 10662218
Qwen2.5 14B DeepSeek R1 1M | 986K / 29.7 GB | 5417
Impish QWEN 14B 1M | 986K / 29.7 GB | 9210
Calcium Opus 14B Elite 1M | 986K / 29.7 GB | 4610
Q2.5 14B Instruct 1M Harmony | 986K / 29.7 GB | 101
...wen2.5 14B Instruct 1M Unalign | 986K / 29.7 GB | 2890
Mergekit Model Stock Injkqri | 986K / 29.7 GB | 141
DeepSeek R1 Distill Qwen 14B | 128K / 29.6 GB | 173836311
Virtuoso Small V2 | 128K / 29.7 GB | 46021
Virtuoso Small | 128K / 29.6 GB | 710765
Note: a green score (e.g. "73.2") means that the model is better than Qwen/Qwen2.5-14B.

Rank the Qwen2.5 14B Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227