DeepSeek R1 Distill Qwen 1.5B Medical Continual Pretrain Merged F32 by MilyaShams


Tags: Merged Model · Autotrain compatible · Conversational · En · Endpoints compatible · Qwen2 · Region: us · Safetensors · Trl · Unsloth

DeepSeek R1 Distill Qwen 1.5B Medical Continual Pretrain Merged F32 Benchmarks

nn.n% indicates how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
DeepSeek R1 Distill Qwen 1.5B Medical Continual Pretrain Merged F32 (MilyaShams/DeepSeek-R1-Distill-Qwen-1.5B-medical-continual-pretrain-merged-f32)

DeepSeek R1 Distill Qwen 1.5B Medical Continual Pretrain Merged F32 Parameters and Internals

LLM Name: DeepSeek R1 Distill Qwen 1.5B Medical Continual Pretrain Merged F32
Repository 🤗: https://huggingface.co/MilyaShams/DeepSeek-R1-Distill-Qwen-1.5B-medical-continual-pretrain-merged-f32
Base Model(s): unsloth/deepseek-r1-distill-qwen-1.5b
Merged Model: Yes
Model Size: 1.5b
Required VRAM: 3.5 GB
Updated: 2025-04-10
Maintainer: MilyaShams
Model Type: qwen2
Model Files: 3.5 GB
Supported Languages: en
Model Architecture: Qwen2ForCausalLM
License: apache-2.0
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.51.0
Tokenizer Class: LlamaTokenizerFast
Padding Token: <|vision_pad|>
Vocabulary Size: 151936
Torch Data Type: bfloat16
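
The card's fields map directly onto a standard Hugging Face loading call. Below is a minimal sketch, assuming torch, transformers >= 4.51.0 (the version on the card), and accelerate (for device_map) are installed, and that the checkpoint behaves like any other Qwen2ForCausalLM; the medical prompt and generation settings are illustrative examples, not part of the card.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "MilyaShams/DeepSeek-R1-Distill-Qwen-1.5B-medical-continual-pretrain-merged-f32"

    # Tokenizer per the card: LlamaTokenizerFast, vocabulary size 151936,
    # padding token <|vision_pad|>.
    tokenizer = AutoTokenizer.from_pretrained(model_id)

    # torch_dtype matches the card's "Torch Data Type" (bfloat16); at 2 bytes
    # per parameter the weights account for the ~3.5 GB of VRAM listed above.
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,
        device_map="auto",
    )

    # Illustrative prompt; the card advertises a 131072-token context length.
    prompt = "A 45-year-old patient presents with acute chest pain. List likely differential diagnoses."
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=256)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))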

Best Alternatives to DeepSeek R1 Distill Qwen 1.5B Medical Continual Pretrain Merged F32

Best Alternatives                  Context / RAM      Downloads   Likes
ReaderLM V2                        500K / 3.1 GB      63359       619
Reader Lm 1.5B                     250K / 3.1 GB      584         596
DeepSeek R1 Distill Qwen 1.5B      128K / 3.5 GB      1819535     1176
DeepScaleR 1.5B Preview            128K / 7.1 GB      70796       549
DeepCoder 1.5B Preview             128K / 7.1 GB      2841        63
Qwen2.5 1.5B                       128K / 3.1 GB      754667      99
ZR1 1.5B                           128K / 7.1 GB      1457        63
OpenMath Nemotron 1.5B             128K / 3.1 GB      661         16
Qwen2 1.5B                         128K / 3.1 GB      206011      91
AlphaMaze V0.2 1.5B                128K / 3.5 GB      2609        92
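The Downloads and Likes figures above are live Hugging Face Hub statistics and drift over time. A minimal sketch for fetching current numbers, assuming the huggingface_hub client is installed (its model_info call returns a ModelInfo object exposing downloads and likes fields):

    from huggingface_hub import model_info

    # Live metadata for the model on this page; any repo id from the
    # alternatives table works the same way.
    info = model_info(
        "MilyaShams/DeepSeek-R1-Distill-Qwen-1.5B-medical-continual-pretrain-merged-f32"
    )
    print(f"downloads={info.downloads}, likes={info.likes}")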

Rank the DeepSeek R1 Distill Qwen 1.5B Medical Continual Pretrain Merged F32 Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227