LLM Name | Distilled Chat Math |
Repository 🤗 | https://huggingface.co/niteshagarwala/distilled_chat_math |
Base Model(s) | |
Model Size | 135M |
Required VRAM | 0.5 GB |
Updated | 2025-03-18 |
Maintainer | niteshagarwala |
Model Type | llama |
Instruction-Based | Yes |
Model Files | |
Model Architecture | LlamaForCausalLM |
License | apache-2.0 |
Context Length | 2048 |
Model Max Length | 2048 |
Transformers Version | 4.45.2 |
Tokenizer Class | GPT2Tokenizer |
Padding Token | <|im_end|> |
Vocabulary Size | 49152 |
Torch Data Type | float32 |
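A minimal loading sketch based on the configuration above, assuming the repository id listed is live on the Hugging Face Hub; the prompt is purely illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "niteshagarwala/distilled_chat_math"

# Per the table: GPT2Tokenizer, LlamaForCausalLM, float32 (~0.5 GB), 2048-token context.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Instruction-based model, so format the prompt with the chat template.
messages = [{"role": "user", "content": "What is 12 * 7?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=64)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```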
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---|
SmolLM2 135M Instruct | 8K / 0.3 GB | 198312 | 156 |
SmolLM2 135M Grpo Gsm8k | 8K / 0.5 GB | 412 | 4 |
...rtis SmolLM2 135M Instruct DPO | 8K / 0.5 GB | 242 | 0 |
Reasoning SmolLM2 135M | 8K / 0.5 GB | 695 | 5 |
Kurtis SmolLM2 135M Instruct | 8K / 0.5 GB | 56 | 0 |
Jaja Small V4 | 8K / 0.5 GB | 109 | 0 |
...wre324 R1 SmolLM2 135M Distill | 8K / 0.5 GB | 17 | 0 |
...molLM2 135M Instruct Reasoning | 8K / 0.3 GB | 7 | 0 |
SmolLM2 135M Instruct | 8K / 0.3 GB | 581 | 2 |
Smollm 135M Full Fineweb Is | 8K / 0.5 GB | 6 | 0 |