LLM Name | Tinyllama Gguf 16B
---|---
Repository 🤗 | https://huggingface.co/abdymazhit/tinyllama-gguf-16b
Base Model(s) |
Model Size | 16b
Required VRAM | 2.2 GB
Updated | 2024-09-16
Maintainer | abdymazhit
Model Type | llama
Model Files |
Supported Languages | en
GGUF Quantization | Yes
Quantization Type | gguf \| 4bit
Model Architecture | AutoModel
License | apache-2.0
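
The card lists a 4-bit GGUF quantization but leaves the Model Files field empty, so a minimal loading sketch is given below using llama-cpp-python's `Llama.from_pretrained`; the `*.gguf` glob is a placeholder for whichever file the repository actually ships, and `n_ctx` is an assumed context size, not a value taken from the card.

```python
# Minimal sketch: load the GGUF file from the Hugging Face repo with
# llama-cpp-python (pip install llama-cpp-python huggingface_hub).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="abdymazhit/tinyllama-gguf-16b",
    filename="*.gguf",  # placeholder glob: the card does not list file names
    n_ctx=2048,         # assumed context window; adjust to the model's limit
)

out = llm("Q: What does GGUF stand for?\nA:", max_tokens=32)
print(out["choices"][0]["text"])
```
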
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
Llama 3 16B Instruct V0.1 GGUF | 0K / 6.4 GB | 509 | 9 |
Nanbeige 16B Chat 32K GGUF | 0K / 6.6 GB | 144 | 6 |
Nanbeige 16B Chat GGUF | 0K / 6.6 GB | 115 | 1 |
Nanbeige 16B Base GGUF | 0K / 6.6 GB | 114 | 1 |
Nanbeige 16B Base 32K GGUF | 0K / 6.6 GB | 87 | 3 |
Ct2fast Codegen2 16B | 0K / 32.1 GB | 1 | 1 |
Ct2fast Codegen 16B Mono | 0K / 32.1 GB | 1 | 2 |