Qwen2.5 Math 7B by Qwen


Tags: arxiv:2409.12122, autotrain-compatible, base_model:finetune:Qwen/Qwen2.5-7B, base_model:Qwen/Qwen2.5-7B, conversational, en, endpoints-compatible, qwen2, region:us, safetensors, sharded, tensorflow
Model Card on HF 🤗: https://huggingface.co/Qwen/Qwen2.5-Math-7B

Qwen2.5 Math 7B Benchmarks

Scores shown as nn.n% indicate how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Qwen2.5 Math 7B (Qwen/Qwen2.5-Math-7B)

Qwen2.5 Math 7B Parameters and Internals

Model Type: text-generation
Use Cases:
  Areas: mathematics
  Applications: problem solving in math
  Primary Use Cases: solving English and Chinese math problems through Chain-of-Thought (CoT) and Tool-Integrated Reasoning (TIR); see the usage sketch below
  Limitations: not recommended for tasks outside solving math problems
Supported Languages: en (English), zh (Chinese)
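
Since this checkpoint is the base math model rather than the Instruct variant, a simple way to exercise the CoT use case above is plain-text completion through Hugging Face transformers. The sketch below is illustrative, not the official Qwen recipe; the prompt wording and generation settings are assumptions, and it needs a GPU with room for the ~15.4 GB of bf16 weights.

```python
# Minimal sketch (not the official Qwen example): plain-text Chain-of-Thought
# prompting of the base model via Hugging Face transformers. The prompt wording
# and generation settings are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-Math-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the card's Torch Data Type
    device_map="auto",
)

# CoT-style prompt: ask the model to reason step by step before answering.
prompt = (
    "Question: Find the value of x that satisfies 2x + 3 = 11.\n"
    "Let's think step by step.\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```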
LLM Name: Qwen2.5 Math 7B
Repository 🤗: https://huggingface.co/Qwen/Qwen2.5-Math-7B
Base Model(s): Qwen/Qwen2.5-7B
Model Size: 7B
Required VRAM: 15.4 GB
Updated: 2025-02-22
Maintainer: Qwen
Model Type: qwen2
Model Files: 1-of-4 (4.0 GB), 2-of-4 (3.9 GB), 3-of-4 (3.9 GB), 4-of-4 (3.6 GB)
Supported Languages: en
Model Architecture: Qwen2ForCausalLM
License: apache-2.0
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.44.0
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 152064
Torch Data Type: bfloat16
Errors: replace
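
These configuration and tokenizer values can be read back programmatically. A small sketch, assuming only the standard Hugging Face transformers APIs; it fetches the config and tokenizer files, not the sharded weights:

```python
# Minimal sketch: read back the configuration and tokenizer values listed above
# from the Hub without downloading the sharded safetensors weights.
from transformers import AutoConfig, AutoTokenizer

model_id = "Qwen/Qwen2.5-Math-7B"

config = AutoConfig.from_pretrained(model_id)
print(config.architectures)            # expected: ['Qwen2ForCausalLM']
print(config.max_position_embeddings)  # expected: 4096 (context length)
print(config.vocab_size)               # expected: 152064
print(config.torch_dtype)              # expected: bfloat16

tokenizer = AutoTokenizer.from_pretrained(model_id)
print(type(tokenizer).__name__)        # Qwen2TokenizerFast (slow class: Qwen2Tokenizer)
print(tokenizer.pad_token)             # expected: <|endoftext|>
```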

Quantized Models of the Qwen2.5 Math 7B

Model | Likes | Downloads | VRAM
Qwen2.5 Math 7B Bnb 4bit | 2 | 1242 | 5 GB
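
For comparison with the 4-bit repack above, the original checkpoint can also be quantized on the fly with bitsandbytes. A minimal sketch, assuming a CUDA GPU with the bitsandbytes package installed; the NF4 settings are illustrative defaults and not necessarily what the quantized repository used:

```python
# Minimal sketch: load the base checkpoint in 4-bit via bitsandbytes, which
# reduces the weight footprint to roughly a third of the 15.4 GB bf16 checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Qwen/Qwen2.5-Math-7B"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # illustrative choice
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```

On-the-fly quantization trades a longer load time for not having to keep a separate pre-quantized copy of the weights.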

Best Alternatives to Qwen2.5 Math 7B

Best Alternatives | Context / RAM | Downloads | Likes
Qwen2.5 7B Instruct 1M | 986K / 15.4 GB | 289038 | 236
Qwen2.5 7B MixStock V0.1 | 986K / 15.2 GB | 682 | 3
Qwen2.5 7B RRP 1M | 986K / 15.2 GB | 294 | 4
Qwen2.5 7B CelestialHarmony 1M | 986K / 14.8 GB | 153 | 5
Qwen 2.5 7B Exp Sce | 986K / 15.2 GB | 28 | 2
COCO 7B Instruct 1M | 986K / 15.2 GB | 105 | 9
SJT 7B V1.1 | 986K / 14.8 GB | 152 | 1
Q2.5 Instruct 1M Harmony | 986K / 15.2 GB | 61 | 1
Impish QWEN 7B 1M | 986K / 15.2 GB | 70 | 1
Qwen 2.5 7B Deep Stock V5 | 986K / 15.2 GB | 30 | 2
Note: a green score (e.g. "73.2") means the alternative model is better than Qwen/Qwen2.5-Math-7B.


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227