LLM Name | Minicpm 2B Sft Bf16 Llamafied 16K
---|---
Repository 🤗 | https://huggingface.co/NurtureAI/minicpm-2b-sft-bf16-llamafied-16k
Model Size | 2b
Required VRAM | 6 GB
Updated | 2024-12-08
Maintainer | NurtureAI
Model Type | llama
Supported Languages | en, zh
Model Architecture | LlamaForCausalLM
License | apache-2.0
Context Length | 16384
Model Max Length | 16384
Transformers Version | 4.36.2
Tokenizer Class | LlamaTokenizer
Vocabulary Size | 122753
Torch Data Type | bfloat16
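
The configuration above (LlamaForCausalLM architecture, LlamaTokenizer, bfloat16 weights) maps onto the standard Transformers loading path. Below is a minimal sketch of loading and running the checkpoint; the prompt and generation settings are illustrative, not taken from the model card, and `device_map="auto"` assumes the optional `accelerate` package is installed.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository id from the link above.
model_id = "NurtureAI/minicpm-2b-sft-bf16-llamafied-16k"

# The repo's tokenizer config resolves to LlamaTokenizer automatically.
tokenizer = AutoTokenizer.from_pretrained(model_id)

# torch_dtype=torch.bfloat16 matches the checkpoint's native data type,
# keeping the ~6 GB VRAM footprint listed above.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires `accelerate`; omit to load on CPU
)

# Illustrative prompt; the model supports contexts up to 16384 tokens.
inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```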
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
Llama 2B Hf 32768 Fpf | 32K / 3.8 GB | 175 | 1 |
YuLan Mini | 28K / 4.8 GB | 322 | 35 |
Salamandra 2B | 8K / 4.5 GB | 10277 | 22 |
SmolLM2 MedIT Upscale 2B | 8K / 4.2 GB | 8 | 4 |
Salamandra 2B Instruct | 8K / 4.5 GB | 2989 | 18 |
Sarvam 2B V0.5 | 8K / 5.1 GB | 634 | 83 |
Test Quantized | 8K / 5.8 GB | 76 | 0 |
EPFL TA Meister Quantized V1 | 8K / 5.8 GB | 78 | 0 |
Llama3 2B Base | 8K / 4.7 GB | 83 | 1 |
Llama3 Rommie | 8K / 5.8 GB | 76 | 0 |