| Property | Value |
|---|---|
| LLM Name | DeepSeek R1 Distill Qwen 32B |
| Repository 🤗 | https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B |
| Model Size | 32b |
| Required VRAM | 65.7 GB |
| Updated | 2025-03-11 |
| Maintainer | deepseek-ai |
| Model Type | qwen2 |
| Model Architecture | Qwen2ForCausalLM |
| License | mit |
| Context Length | 131072 |
| Model Max Length | 131072 |
| Transformers Version | 4.43.1 |
| Tokenizer Class | LlamaTokenizerFast |
| Beginning of Sentence Token | <\|begin▁of▁sentence\|> |
| End of Sentence Token | <\|end▁of▁sentence\|> |
| Vocabulary Size | 152064 |
| Torch Data Type | bfloat16 |
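For reference, a minimal loading sketch based on the configuration above (bfloat16 weights, 131072-token context). It assumes enough GPU memory for the ~65.7 GB of weights listed, or several GPUs for `device_map="auto"` to shard across; the prompt string is illustrative only.

```python
# Minimal sketch: load the checkpoint listed above with Hugging Face Transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the Torch Data Type listed above
    device_map="auto",           # shard across available GPUs
)

prompt = "Explain the difference between supervised and reinforcement learning."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```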
| Model | Likes | Downloads | VRAM |
|---|---|---|---|
| ...pSeek R1 Distill Qwen 32B 4bit | 36 | 447918 | 18 GB |
| ...k R1 Distill Qwen 32B Bnb 4bit | 23 | 62556 | 19 GB |
| ...epSeek R1 Distill Qwen 32B AWQ | 24 | 23078 | 19 GB |
| ...till Qwen 32B Unsloth Bnb 4bit | 9 | 5280 | 35 GB |
| PathFinderAI S1 | 0 | 251 | 65 GB |
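The 4-bit rows above are prequantized uploads whose repository names are truncated in this listing. As an alternative, a hedged sketch of quantizing the base checkpoint on the fly with bitsandbytes, which lands in roughly the same 18–20 GB VRAM range; swap in a prequantized repo ID if you have one.

```python
# Sketch: on-the-fly 4-bit loading of the base checkpoint with bitsandbytes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,  # quantize weights to 4-bit at load time
    device_map="auto",
)
```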
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Openbuddy Qwq 32B V24.2 200K | 195K / 65.8 GB | 76 | 3 |
| Openbuddy Qwq 32B V24.1 200K | 195K / 65.8 GB | 78 | 3 |
| ...y Qwen2.5coder 32B V24.1q 200K | 195K / 65.8 GB | 13 | 2 |
| QwQ 32B | 128K / 65.8 GB | 132036 | 1867 |
| TinyR1 32B Preview | 128K / 65.6 GB | 5106 | 315 |
| Qwen2.5 32B | 128K / 65.5 GB | 100110 | 122 |
| RomboUltima 32B | 128K / 20.7 GB | 146 | 2 |
| ...k R1 Distill Qwen 32B Japanese | 128K / 65.8 GB | 7362 | 244 |
| Ultiima 32B | 128K / 65.8 GB | 334 | 5 |
| ...wen2.5 32B Inst BaseMerge TIES | 128K / 65.8 GB | 122 | 13 |
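The RAM and VRAM figures in these tables follow roughly from parameter count times bytes per parameter. A small back-of-the-envelope helper is sketched below; the 32.8e9 parameter count is an assumed value for the 32B class, and real usage adds KV cache and runtime overhead on top of the raw weight size.

```python
# Rough rule of thumb behind the VRAM columns: weights ≈ params × bytes per param.
def weight_vram_gb(n_params: float, bits_per_param: float) -> float:
    """Approximate weight memory in GB (1 GB = 1e9 bytes)."""
    return n_params * bits_per_param / 8 / 1e9

params_32b = 32.8e9  # assumed parameter count for the 32B class

print(f"bfloat16 (16-bit): ~{weight_vram_gb(params_32b, 16):.1f} GB")  # ~65.6 GB
print(f"4-bit quantized:   ~{weight_vram_gb(params_32b, 4):.1f} GB")   # ~16.4 GB
```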