Field | Value
---|---
LLM Name | Salamandra 2B Instruct
Repository 🤗 | https://huggingface.co/BSC-LT/salamandra-2b-instruct
Base Model(s) | |
Model Size | 2B
Required VRAM | 4.5 GB
Updated | 2025-02-05
Maintainer | BSC-LT
Model Type | llama
Instruction-Based | Yes
Model Files | |
Supported Languages | bg, ca, code, cs, cy, da, de, el, en, es, et, eu, fi, fr, ga, gl, hr, hu, it, lt, lv, mt, nl, nn, oc, pl, pt, ro, ru, sh, sk, sl, sr, sv, uk
Model Architecture | LlamaForCausalLM
License | apache-2.0
Context Length | 8192
Model Max Length | 8192
Transformers Version | 4.40.2
Tokenizer Class | LlamaTokenizer
Padding Token | &lt;unk&gt;
Vocabulary Size | 256000
Torch Data Type | bfloat16
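The Required VRAM figure is consistent with a weights-only footprint at the listed bfloat16 precision. A minimal sanity check, assuming a total parameter count of roughly 2.25 B (the exact count is not stated on this card; KV cache and activations would add to this):

```python
# Back-of-the-envelope check of the "Required VRAM" figure above.
# Assumption: ~2.25e9 parameters (not listed on the card), stored in
# bfloat16 (2 bytes each); inference overhead (KV cache, activations)
# is not included in this estimate.
params = 2.25e9
bytes_per_param = 2            # bfloat16, per "Torch Data Type" above
weights_gb = params * bytes_per_param / 1e9
print(f"{weights_gb:.1f} GB")  # → 4.5 GB, matching the listed figure
```

The same arithmetic explains why a quantized 4-bit variant would need roughly a quarter of this for weights alone.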
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
SmolLM2 MedIT Upscale 2B | 8K / 4.2 GB | 8 | 4
Llama3 2B Base | 8K / 4.7 GB | 83 | 1