Qwen2.5 7B Instruct by unsloth


Tags: Arxiv:2309.00071, Arxiv:2407.10671, Autotrain compatible, Base model:finetune:qwen/qwen2..., Base model:qwen/qwen2.5-7b-ins..., Conversational, En, Endpoints compatible, Instruct, Qwen2, Region:us, Safetensors, Sharded, Tensorflow, Unsloth

Qwen2.5 7B Instruct Benchmarks

nn.n% indicates how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Qwen2.5 7B Instruct (unsloth/Qwen2.5-7B-Instruct)

Qwen2.5 7B Instruct Parameters and Internals

LLM Name: Qwen2.5 7B Instruct
Repository: 🤗 https://huggingface.co/unsloth/Qwen2.5-7B-Instruct
Base Model(s): Qwen/Qwen2.5-7B-Instruct
Model Size: 7b
Required VRAM: 15.2 GB
Updated: 2024-12-21
Maintainer: unsloth
Model Type: qwen2
Instruction-Based: Yes
Model Files: 4.9 GB (1-of-4), 4.9 GB (2-of-4), 4.3 GB (3-of-4), 1.1 GB (4-of-4)
Model Architecture: Qwen2ForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.44.2
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|PAD_TOKEN|>
Vocabulary Size: 152064
Torch Data Type: bfloat16
Errors: replace
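
The internals above translate directly into a standard Transformers loading call: the repository is a sharded bfloat16 checkpoint (the 15.2 GB VRAM figure matches roughly 7.6B parameters at 2 bytes per weight) with a 32768-token context and a Qwen2Tokenizer. The snippet below is a minimal sketch, not an official recipe; it assumes transformers 4.44.2 or newer, a GPU with enough free memory for the bf16 weights, and a purely illustrative prompt.

# Minimal sketch: load unsloth/Qwen2.5-7B-Instruct in bfloat16 with transformers.
# Assumes transformers >= 4.44.2 and ~15.2 GB of free GPU memory; the prompt
# and generation settings are illustrative, not recommended values.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "unsloth/Qwen2.5-7B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # Qwen2Tokenizer, vocab size 152064
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the listed Torch Data Type
    device_map="auto",
)

# Qwen2.5 7B Instruct is a chat model, so format the prompt with the chat template.
messages = [{"role": "user", "content": "Explain what a 32768-token context length allows."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))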

Quantized Models of Qwen2.5 7B Instruct

Model | Likes | Downloads | VRAM
....5 7B DPO Split1 16bit Chunk1 | 20 | 339 | 15 GB
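
When the full-precision checkpoint does not fit in local VRAM, a quantized load is the usual workaround. The sketch below is an assumption rather than anything published by the maintainer: it loads the same repository in 4-bit through transformers with a bitsandbytes config, which cuts the weight footprint to roughly a quarter of the bf16 figure; the community uploads listed above live in their own repositories.

# Hypothetical fallback: load the checkpoint in 4-bit when ~15.2 GB of VRAM is not available.
# Requires the bitsandbytes package; the settings shown are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "unsloth/Qwen2.5-7B-Instruct"  # or the repo id of a quantized variant

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,  # weights occupy roughly 5-6 GB instead of ~15.2 GB
    device_map="auto",
)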

Best Alternatives to Qwen2.5 7B Instruct

Best Alternatives | Context / RAM | Downloads | Likes
Cybertron V4 Qw7B UNAMGS | 128K / 15.2 GB | 228 | 25
Gte Qwen2 7B Instruct | 128K / 30.5 GB | 41284 | 235
Cybertron V4 Qw7B MGS | 128K / 15.2 GB | 274 | 11
Rombos LLM V2.5 Qwen 7B | 128K / 15.2 GB | 300 | 15
Qwen2 7B Instruct V0.1 | 128K / 15.2 GB | 40968 | 1
Tsunami 0.5x 7B Instruct | 128K / 15.2 GB | 364 | 1
Qwen2 7B Instruct V0.8 | 128K / 15.2 GB | 40990 | 3
...areneg3Bv2 ECE PRYMMAL Martial | 128K / 15.2 GB | 60 | 1
Einstein V7 Qwen2 7B | 128K / 15.2 GB | 7145 | 35
Tsunami 0.5 7B Instruct | 128K / 15.2 GB | 362 | 0
Note: a green score (e.g. "73.2") means that the alternative model outperforms unsloth/Qwen2.5-7B-Instruct.

Rank the Qwen2.5 7B Instruct Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

Looking for other open-source LLMs or SLMs? The catalog lists 40013 models in total.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217