SmolLM2 1.7B by HuggingFaceTB


  Arxiv:2502.02737   Autotrain compatible   En   Endpoints compatible   Llama   Region:us   Safetensors

SmolLM2 1.7B Benchmarks

Scores (nn.n%) show how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
SmolLM2 1.7B (HuggingFaceTB/SmolLM2-1.7B)

SmolLM2 1.7B Parameters and Internals

Model Type: language model
Use Cases:
  Areas: text rewriting, summarization, function calling
  Applications: research, commercial applications
  Limitations: primarily understands and generates content in English; generated content may not be factually accurate, logically consistent, or free from biases; should be used as an assistive tool rather than a definitive source of information
  Considerations: users should verify important information and critically evaluate any generated content.
Additional Notes: the instruct version of the model is tuned to support tasks beyond standard language modeling.
Supported Languages: en (primary)
Training Details:
  Data Sources: FineWeb-Edu, DCLM, The Stack, new mathematics dataset, coding dataset
  Data Volume: 11 trillion tokens
  Methodology: supervised fine-tuning (SFT), Direct Preference Optimization (DPO)
  Hardware Used: 256 H100 GPUs
  Model Architecture: Transformer decoder
Input Output:
  Accepted Modalities: text
  Performance Tips: none specified.
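The training figures above (1.7B parameters, 11 trillion tokens) imply a rough compute budget under the common "6 FLOPs per parameter per token" heuristic. A minimal sketch; the heuristic is a standard back-of-envelope estimate, not a figure from the model card:

```python
def training_flops(n_params: float, n_tokens: float) -> float:
    """Back-of-envelope training compute: ~6 FLOPs per parameter per token."""
    return 6 * n_params * n_tokens

# 1.7B parameters trained on 11T tokens (figures from the card above)
flops = training_flops(1.7e9, 11e12)
print(f"{flops:.2e}")  # ≈ 1.12e+23 FLOPs
```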
LLM Name: SmolLM2 1.7B
Repository: 🤗 https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B
Model Size: 1.7B
Required VRAM: 3.4 GB
Updated: 2025-02-22
Maintainer: HuggingFaceTB
Model Type: llama
Model Files: 3.4 GB
Supported Languages: en
Model Architecture: LlamaForCausalLM
License: apache-2.0
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.40.1
Tokenizer Class: GPT2Tokenizer
Vocabulary Size: 49152
Torch Data Type: bfloat16
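The listed 3.4 GB VRAM requirement is consistent with storing 1.7B parameters at 2 bytes each (bfloat16). A quick sanity check, assuming weights only (no activation or KV-cache memory):

```python
def weight_memory_gb(n_params: float, bytes_per_param: float) -> float:
    """Weights-only memory estimate in decimal gigabytes."""
    return n_params * bytes_per_param / 1e9

# 1.7B parameters in bfloat16 (2 bytes per parameter)
print(round(weight_memory_gb(1.7e9, 2), 1))  # 3.4
```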

Quantized Models of the SmolLM2 1.7B

Model | Likes | Downloads | VRAM
SmolTulu 1.7B Instruct | 13 | 194 | 3 GB
SmolLM2 1.7B Bnb 4bit | 3 | 296 | 1 GB
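The ~1 GB figure for the 4-bit (bnb) variant likewise follows from bits per parameter. A sketch; real 4-bit checkpoints run somewhat larger than this weights-only estimate because embeddings and norm layers are typically kept in higher precision:

```python
def quantized_weight_gb(n_params: float, bits_per_param: float) -> float:
    """Weights-only size at a given bit width, in decimal gigabytes."""
    return n_params * bits_per_param / 8 / 1e9

# 1.7B parameters at 4 bits per parameter
print(round(quantized_weight_gb(1.7e9, 4), 2))  # 0.85, close to the listed 1 GB
```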

Best Alternatives to SmolLM2 1.7B

Best Alternatives | Context / RAM | Downloads | Likes
SmolLM2 1.7B Instruct | 8K / 3.4 GB | 380890 | 549
Superthoughts Lite V1 | 8K / 3.4 GB | 1065 | 2
SmolTulu 1.7B Reinforced | 8K / 3.4 GB | 242 | 5
...ghts Lite 1.8B Experimental O1 | 8K / 3.6 GB | 240 | 1
SmolLM2 1.7B Instruct | 8K / 3.4 GB | 8549 | 4
SmolLM2 1.7B | 8K / 3.4 GB | 7185 | 4
SmolLM2 1.7 Persona | 8K / 3.5 GB | 8 | 0
NuExtract 1.5 Smol | 8K / 3.4 GB | 2375 | 4
...RM 1 Smollm2 1.7B Lcot PyTorch | 8K / 3.4 GB | 75 | 0
SmolLM2 Math IIO 1.7B Instruct | 8K / 3.4 GB | 107 | 8
Note: green Score (e.g. "73.2") means that the model is better than HuggingFaceTB/SmolLM2-1.7B.

Rank the SmolLM2 1.7B Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable models for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227