SmolLM2 360M Synthetic Concise Reasoning by argilla


Tags: Autotrain compatible, Base model:finetune:huggingfac..., Base model:huggingfacetb/smoll..., Conversational, Datacraft, Dataset:argilla/synthetic-conc..., En, Endpoints compatible, Generated from trainer, Llama, Region:us, Safetensors, Sft, Tensorboard, Trl

SmolLM2 360M Synthetic Concise Reasoning Benchmarks

nn.n% — how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
SmolLM2 360M Synthetic Concise Reasoning (argilla/SmolLM2-360M-synthetic-concise-reasoning)

SmolLM2 360M Synthetic Concise Reasoning Parameters and Internals

LLM Name: SmolLM2 360M Synthetic Concise Reasoning
Repository 🤗: https://huggingface.co/argilla/SmolLM2-360M-synthetic-concise-reasoning
Model Name: SmolLM2-360M-synthetic-concise-reasoning
Base Model(s): SmolLM2 360M (HuggingFaceTB/SmolLM2-360M)
Model Size: 360M
Required VRAM: 1.4 GB
Updated: 2025-04-16
Maintainer: argilla
Model Type: llama
Model Files: 1.4 GB, 0.0 GB
Supported Languages: en
Model Architecture: LlamaForCausalLM
License: apache-2.0
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.48.0.dev0
Tokenizer Class: GPT2Tokenizer
Padding Token: <|im_end|>
Vocabulary Size: 49152
Torch Data Type: float32
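The card lists `<|im_end|>` as the padding token, which suggests a ChatML-style prompt format. The sketch below is an assumption based on that field, not the model's official chat template: a small helper that builds a ChatML prompt, plus (commented out) the standard `transformers` calls for loading the checkpoint named above.

```python
# Sketch: ChatML-style prompt formatting for this checkpoint.
# Assumption: the <|im_start|>/<|im_end|> delimiters are inferred from the
# card's "Padding Token: <|im_end|>" field; verify against the repo's
# tokenizer_config.json chat template before relying on it.

def build_chatml_prompt(user_message: str,
                        system: str = "You are a concise reasoning assistant.") -> str:
    """Format a single-turn prompt in ChatML, closing each turn with <|im_end|>."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

if __name__ == "__main__":
    prompt = build_chatml_prompt("What is 2 + 2?")
    print(prompt)

    # Loading the weights needs ~1.4 GB in float32, per the card above:
    # from transformers import AutoModelForCausalLM, AutoTokenizer
    # repo = "argilla/SmolLM2-360M-synthetic-concise-reasoning"
    # tok = AutoTokenizer.from_pretrained(repo)
    # model = AutoModelForCausalLM.from_pretrained(repo)
    # out = model.generate(**tok(prompt, return_tensors="pt"), max_new_tokens=128)
    # print(tok.decode(out[0], skip_special_tokens=True))
```

With an 8192-token context length, prompts built this way can carry fairly long multi-turn histories before truncation becomes a concern.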

Best Alternatives to SmolLM2 360M Synthetic Concise Reasoning

Best Alternatives                        Context / RAM    Downloads   Likes
SmolLM2 360M Instruct                    8K / 0.7 GB      682065      111
SmolLM2 360M                             8K / 0.7 GB      111002      45
Smollm2 360M Sft SmallThoughts           8K / 0.7 GB      79          1
...n Combined Instruction Dataset        8K / 1.4 GB      26          0
... Cpt Fineweb Norwegian Nynorsk        8K / 1.4 GB      28          0
SmolLM2 CoT 360M                         8K / 1.4 GB      38          9
SmolLM2 360M Instruct                    8K / 0.7 GB      2178        1
SmolLM2 360M Grpo R999                   8K / 1.4 GB      24          3
Smol Hub Tldr                            8K / 0.7 GB      18          9
SmolLM2 360M                             8K / 0.7 GB      1147        0
Note: green Score (e.g. "73.2") means that the model is better than argilla/SmolLM2-360M-synthetic-concise-reasoning.

Rank the SmolLM2 360M Synthetic Concise Reasoning Capabilities

🆘 Have you tried this model? Rate its performance. This feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

46599 open-source LLMs and SLMs indexed in total.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227