SmolLM2 1.7B by HuggingFaceTB


Tags: Autotrain compatible, En, Endpoints compatible, Ext 8k, Llama, Region: us, Safetensors

SmolLM2 1.7B Benchmarks

Scores are shown as percentages (nn.n%) indicating how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
SmolLM2 1.7B (HuggingFaceTB/SmolLM2-1.7B)

SmolLM2 1.7B Parameters and Internals

Model Type 
language model
Use Cases 
Areas:
text rewriting, summarization, function calling
Applications:
research, commercial applications
Limitations:
Primarily understands and generates content in English; generated content may not be factually accurate, logically consistent, or free from biases; should be used as an assistive tool rather than a definitive source of information.
Considerations:
Users should verify important information and critically evaluate any generated content.
Additional Notes 
The instruct version of the model is tuned to support tasks beyond standard language modeling.
Supported Languages 
en (primary)
Training Details 
Data Sources:
FineWeb-Edu, DCLM, The Stack, new mathematics and coding datasets
Data Volume:
11 trillion tokens
Methodology:
Supervised fine-tuning (SFT), Direct Preference Optimization (DPO)
Hardware Used:
256 H100 GPUs
Model Architecture:
Transformer decoder
Input Output 
Accepted Modalities:
text
Performance Tips:
None specified; see the usage sketch below.
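
A minimal usage sketch for the base checkpoint, assembled from the details on this card (model id, bfloat16 dtype); the device placement and generation settings are assumptions, not values taken from the card.

    # Minimal sketch: load SmolLM2-1.7B and run a short completion.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "HuggingFaceTB/SmolLM2-1.7B"

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,  # matches the card's Torch Data Type
        device_map="auto",           # assumption: requires the `accelerate` package
    )

    # The base model is a plain completion model (the instruct variant is the
    # one tuned for chat-style tasks), so prompt it with text to continue.
    prompt = "Gravity is"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Note that generate() defaults to greedy decoding; the sampling settings above are illustrative.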
LLM Name: SmolLM2 1.7B
Repository 🤗: https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B
Model Size: 1.7b
Required VRAM: 3.4 GB
Updated: 2025-01-02
Maintainer: HuggingFaceTB
Model Type: llama
Model Files: 3.4 GB
Supported Languages: en
Model Architecture: LlamaForCausalLM
License: apache-2.0
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.40.1
Tokenizer Class: GPT2Tokenizer
Vocabulary Size: 49152
Torch Data Type: bfloat16
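
The Required VRAM figure is consistent with the parameter count and data type above: 1.7 billion bfloat16 weights at 2 bytes each come to about 3.4 GB. A quick back-of-the-envelope check (the numbers come from this card; the snippet itself is just illustration):

    # Sanity-check the card's Required VRAM from parameter count and dtype.
    params = 1.7e9        # parameter count, from the model name
    bytes_per_param = 2   # bfloat16 stores each weight in 2 bytes
    weight_gb = params * bytes_per_param / 1e9
    print(f"{weight_gb:.1f} GB")  # -> 3.4 GB, matching the card

    # This covers weights only; the KV cache and activations grow with
    # context length (8192 here), so treat 3.4 GB as a floor, not a budget.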

Quantized Models of the SmolLM2 1.7B

Model | Likes | Downloads | VRAM
SmolTulu 1.7B Instruct | 13 | 361 | 3 GB
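
Quantized variants like the one above shrink the memory footprint below the bf16 figure. Transformers can also quantize on the fly at load time; a hedged sketch using bitsandbytes 4-bit loading (all settings here are illustrative assumptions, not taken from the listed model):

    # Sketch: load the base model with on-the-fly 4-bit quantization.
    import torch
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig

    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,                      # store weights in 4 bits
        bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 for stability
    )
    model = AutoModelForCausalLM.from_pretrained(
        "HuggingFaceTB/SmolLM2-1.7B",
        quantization_config=bnb_config,
    )
    # Weights drop from ~3.4 GB at bf16 to roughly 1 GB at 4 bits.

This path requires the `bitsandbytes` package and a CUDA GPU.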

Best Alternatives to SmolLM2 1.7B

Best Alternatives | Context / RAM | Downloads | Likes
Nyun C2 Llama3 50B | 8K / 100.8 GB | 281 | 0

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227