40K Continued Pretraining Qwen2.5 0.5B Instruct Unsloth Merged 16bit by FlofloB


Tags: Merged Model, 4bit, Autotrain compatible, Conversational, Dataset: uncovai/fineweb cc-mai..., En, Endpoints compatible, Instruct, Pytorch, Quantized, Qwen2, Region: us, Sft, Trl, Unsloth

40K Continued Pretraining Qwen2.5 0.5B Instruct Unsloth Merged 16bit Benchmarks

Score (nn.n%) — how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
40K Continued Pretraining Qwen2.5 0.5B Instruct Unsloth Merged 16bit (FlofloB/40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit)

40K Continued Pretraining Qwen2.5 0.5B Instruct Unsloth Merged 16bit Parameters and Internals

LLM Name: 40k Continued Pretraining Qwen2.5 0.5B Instruct Unsloth Merged 16bit
Repository 🤗: https://huggingface.co/FlofloB/40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit
Base Model(s): unsloth/qwen2.5-0.5b-instruct-bnb-4bit
Merged Model: Yes
Model Size: 0.5b
Required VRAM: 1.3 GB
Updated: 2024-12-21
Maintainer: FlofloB
Model Type: qwen2
Instruction-Based: Yes
Model Files: 1.3 GB
Supported Languages: en
Quantization Type: 4bit
Model Architecture: Qwen2ForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.46.2
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|PAD_TOKEN|>
Vocabulary Size: 151936
Torch Data Type: float16
Errors: replace
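
For reference, a minimal loading sketch assuming the standard Hugging Face transformers API; the repo id, dtype, and weight size come from the listing above, while the prompt and generation settings are illustrative assumptions, not recommendations.

# Minimal sketch: load the merged fp16 model and run one chat turn.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "FlofloB/40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # listing reports float16 weights (~1.3 GB)
)

# Qwen2 instruct models ship a chat template; apply it before generating.
messages = [{"role": "user", "content": "Briefly explain continued pretraining."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))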

Best Alternatives to 40K Continued Pretraining Qwen2.5 0.5B Instruct Unsloth Merged 16bit

Best Alternatives                   Context / RAM   Downloads   Likes
Acrux 500M O1 Journey               32K / 1 GB      247         7
Qwen2.5 0.5B Instruct Bnb 4bit      32K / 0.5 GB    9955        2
... Instruct Unsloth Merged 16bit   32K / 1.3 GB    64          1
... Instruct Unsloth Merged 16bit   32K / 1.3 GB    53          1
... Instruct Unsloth Merged 16bit   32K / 1.3 GB    30          1
...en2.5 Coder 0.5B Instruct 4bit   32K / 0.3 GB    125         2
Qwen2 0.5B Instruct Bnb 4bit        32K / 0.5 GB    5938        4
Qwen2.5 0.5B Instruct 4bit          32K / 0.3 GB    297         2
Qwen2 0.5B 16bit                    32K / 1 GB      52          0
....5B Instruct Bnb 4bit MAC Lora   32K / 0.5 GB    9           1
Note: a green score (e.g. "73.2") means the model is better than FlofloB/40k_continued_pretraining_Qwen2.5-0.5B-Instruct_Unsloth_merged_16bit.
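
Several of the alternatives above, like the listed base model, are bitsandbytes 4-bit quantizations. A minimal sketch, assuming the standard transformers + bitsandbytes API, of how such a 4-bit variant is typically loaded; the config values are illustrative assumptions, not tuned recommendations.

# Minimal sketch: load a bnb-4bit variant with a quantization config.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "unsloth/qwen2.5-0.5b-instruct-bnb-4bit"  # base model from the listing

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,  # matches the fp16 dtype listed above
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=bnb_config,
    device_map="auto",  # bitsandbytes 4-bit loading requires a CUDA device
)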

Rank the 40K Continued Pretraining Qwen2.5 0.5B Instruct Unsloth Merged 16bit Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable models for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

What open-source LLMs or SLMs are you in search of? 40,066 models are listed in total.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217