LLM Name | SmolLM 1.7B |
---|---|
Repository 🤗 | https://huggingface.co/HuggingFaceTB/SmolLM-1.7B |
Model Size | 1.7B |
Required VRAM | 6.9 GB |
Updated | 2025-02-14 |
Maintainer | HuggingFaceTB |
Model Type | llama |
Model Files | |
Supported Languages | en |
Model Architecture | LlamaForCausalLM |
License | apache-2.0 |
Context Length | 2048 |
Model Max Length | 2048 |
Transformers Version | 4.39.3 |
Tokenizer Class | GPT2Tokenizer |
Vocabulary Size | 49152 |
Torch Data Type | float32 |
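
A minimal loading sketch based on the listing above (repository HuggingFaceTB/SmolLM-1.7B, LlamaForCausalLM architecture, float32 weights, 2048-token context). The exact arguments are an assumption, not an official recipe, and may need adjusting for your hardware and transformers version.

```python
# Sketch: load SmolLM-1.7B with transformers (the listing shows version 4.39.3),
# assuming the repo id, float32 dtype, and 2048-token context from the table above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "HuggingFaceTB/SmolLM-1.7B"

tokenizer = AutoTokenizer.from_pretrained(repo_id)  # GPT2Tokenizer per the listing
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float32,  # matches the listed Torch Data Type (~6.9 GB of memory)
)

prompt = "Gravity is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)  # keep within the 2048-token context
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```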
Best Alternatives | Context / RAM | Downloads | Likes |
---|---|---|---|
SmolLM2 1.7B Instruct | 8K / 3.4 GB | 123770 | 527 |
SmolLM2 1.7B | 8K / 3.4 GB | 69653 | 101 |
SmolTulu 1.7B Reinforced | 8K / 3.4 GB | 322 | 5 |
...ghts Lite 1.8B Experimental O1 | 8K / 3.6 GB | 137 | 1 |
SmolLM2 1.7B Instruct | 8K / 3.4 GB | 9416 | 4 |
SmolLM2 1.7B | 8K / 3.4 GB | 9348 | 4 |
SmolLM2 1.7 Persona | 8K / 3.5 GB | 8 | 0 |
NuExtract 1.5 Smol | 8K / 3.4 GB | 196 | 54 |
...RM 1 Smollm2 1.7B Lcot PyTorch | 8K / 3.4 GB | 169 | 0 |
SmolLM2 Math IIO 1.7B Instruct | 8K / 3.4 GB | 188 | 8 |