LLM Name | Tinyllama 2B
---|---
Repository 🤗 | https://huggingface.co/Aculi/Tinyllama-2B
Base Model(s) |
Model Size | 2B
Required VRAM | 4.2 GB
Updated | 2024-12-22
Maintainer | Aculi
Model Type | llama
Model Files |
Model Architecture | LlamaForCausalLM
Context Length | 2048
Model Max Length | 2048
Transformers Version | 4.42.3
Tokenizer Class | LlamaTokenizer
Padding Token | `<unk>`
Vocabulary Size | 32000
Torch Data Type | bfloat16
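
Given the metadata above, the model should load with the standard `transformers` API. The snippet below is a minimal sketch, assuming a device with roughly 4.2 GB of free memory for the bfloat16 weights; the prompt and generation settings are illustrative only.

```python
# Minimal loading sketch based on the listed metadata (transformers >= 4.42).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Aculi/Tinyllama-2B"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # LlamaTokenizer, vocab size 32000
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the listed torch data type
    device_map="auto",
)

prompt = "Explain what a context window is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)  # model max length is 2048 tokens
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```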
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
Llama 2B Hf 32768 Fpf | 32K / 3.8 GB | 498 | 1
...icpm 2B Sft Bf16 Llamafied 16K | 16K / 6 GB | 560 | 1
SmolLM2 MedIT Upscale 2B | 8K / 4.2 GB | 58 | 4
Salamandra 2B | 8K / 4.5 GB | 3749 | 19
Sarvam 2B V0.5 | 8K / 5.1 GB | 1210 | 82
Llama3 2B Base | 8K / 4.7 GB | 426 | 1
Test Quantized | 8K / 5.8 GB | 13 | 0
EPFL TA Meister Quantized V1 | 8K / 5.8 GB | 13 | 0
Llama3 Rommie | 8K / 5.8 GB | 17 | 0
...ta Llama 3 2B Mlp Layer Pruned | 8K / 5.1 GB | 43 | 0