Qwen2.5 0.5B by unsloth


Tags: arXiv:2407.10671 · autotrain-compatible · base model (finetune): Qwen/Qwen2.5-0.5B · en · endpoints-compatible · qwen2 · region: us · safetensors · unsloth
Model Card on HF 🤗: https://huggingface.co/unsloth/Qwen2.5-0.5B

Qwen2.5 0.5B Benchmarks

nn.n%: how the model compares to the reference models Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").

Qwen2.5 0.5B Parameters and Internals

Model Type 
Causal Language Models
Use Cases 
Areas:
Research, Commercial applications
Applications:
Text generation, Multilingual translation, Coding assistance, Mathematical computations
Limitations:
Not recommended for conversational use without post-training such as SFT or RLHF.
Considerations:
Post-training is recommended for specialized conversational use cases; a minimal fine-tuning sketch follows below.
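Since this is a base model, one common post-training route is supervised fine-tuning. The sketch below uses the TRL library, assuming recent trl/transformers versions (SFTConfig argument names have moved between trl releases); the dataset is only a stand-in with a plain "text" column, and the output path is hypothetical.

    from datasets import load_dataset
    from trl import SFTConfig, SFTTrainer

    # Stand-in dataset with a raw "text" column; replace with real chat data.
    dataset = load_dataset("stanfordnlp/imdb", split="train[:1000]")

    trainer = SFTTrainer(
        model="unsloth/Qwen2.5-0.5B",        # the base model from this card
        train_dataset=dataset,
        args=SFTConfig(
            output_dir="qwen2.5-0.5b-sft",   # hypothetical output path
            dataset_text_field="text",       # column holding the training text
            max_seq_length=1024,             # well under the 32768-token context
            per_device_train_batch_size=2,
            num_train_epochs=1,
        ),
    )
    trainer.train()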
Additional Notes 
Qwen2.5 features long-context support of up to 128K tokens and generation of up to 8K tokens, with significantly improved capabilities in instruction following, coding, and mathematics.
Supported Languages 
Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
Training Details 
Data Sources:
Multiple expert data sources in coding, mathematics, and multilingual domains
Methodology:
Pre-training with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
Context Length:
32768
Model Architecture:
Transformer with RoPE, SwiGLU, RMSNorm, attention QKV bias, and tied word embeddings (see the config sketch below)
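The architectural choices listed above map directly onto fields of the model's Hugging Face configuration. A quick inspection sketch with transformers (attribute names follow the Qwen2 config class; the values in comments reflect this card's specs):

    from transformers import AutoConfig

    cfg = AutoConfig.from_pretrained("unsloth/Qwen2.5-0.5B")
    print(cfg.model_type)               # "qwen2"
    print(cfg.hidden_act)               # "silu", the gate activation of SwiGLU
    print(cfg.rope_theta)               # RoPE base frequency
    print(cfg.rms_norm_eps)             # RMSNorm epsilon
    print(cfg.tie_word_embeddings)      # tied input/output embeddings
    print(cfg.max_position_embeddings)  # 32768-token context length
    print(cfg.vocab_size)               # 151936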
LLM Name: Qwen2.5 0.5B
Repository 🤗: https://huggingface.co/unsloth/Qwen2.5-0.5B
Base Model(s): Qwen/Qwen2.5-0.5B
Model Size: 0.5B
Required VRAM: 1 GB
Updated: 2024-12-21
Maintainer: unsloth
Model Type: qwen2
Model Files: 1.0 GB
Supported Languages: en
Model Architecture: Qwen2ForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.44.2
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|PAD_TOKEN|>
Vocabulary Size: 151936
Torch Data Type: bfloat16
Errors: replace
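Given the specs in the table (Qwen2ForCausalLM architecture, Qwen2Tokenizer, bfloat16 weights in roughly 1 GB), a minimal loading-and-completion sketch with transformers follows. Note this is a base model: it continues text rather than following chat turns, and the prompt is only an illustration.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "unsloth/Qwen2.5-0.5B"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,  # matches the card's bfloat16 weights (~1 GB VRAM)
        device_map="auto",           # requires the accelerate package
    )

    # Base-model usage: plain text continuation, not chat.
    inputs = tokenizer("The three laws of robotics are", return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))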

Best Alternatives to Qwen2.5 0.5B

Best Alternatives | Context / RAM | Downloads | Likes
Reader Lm 0.5B | 250K / 1 GB | 6691 | 27
Qwen2 0.5B | 128K / 1 GB | 1504690 | 119
...PRYMMAL 0.5B FT V4 MUSR Mathis | 128K / 1 GB | 62 | 1
...0.5B FT EnhancedMUSREnsembleV3 | 128K / 1 GB | 32 | 1
....5B FT V4 MUSR ENSEMBLE Mathis | 128K / 2 GB | 30 | 1
...0.5B FT MUSR ENSEMBLE V2Mathis | 128K / 1 GB | 25 | 0
ECE PRYMMAL 0.5B FT V3 MUSR | 128K / 2 GB | 31 | 0
ECE PRYMMAL 0.5B FT V4 MUSR | 128K / 2 GB | 26 | 0
Qwen2 0.5B | 128K / 1 GB | 3438 | 2
...0.5B FT EnhancedMUSREnsembleV3 | 128K / 1 GB | 26 | 1



Original data from HuggingFace, OpenCompass, and various public Git repos.
Release v20241217