LLM Name | Qwen2 1.5B Gguf F16 |
---|---|
Repository 🤗 | https://huggingface.co/inflaton/Qwen2-1.5B-gguf-f16 |
Model Size | 1.5B |
Required VRAM | 3.1 GB |
Updated | 2024-12-06 |
Maintainer | inflaton |
Model Type | qwen2 |
Model Files | |
GGUF Quantization | Yes |
Quantization Type | fp16 / gguf |
Model Architecture | AutoModel |
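Because the repository ships a full-precision (F16) GGUF file of roughly 3.1 GB, it can be run locally with llama.cpp-compatible tooling. Below is a minimal sketch, assuming the `huggingface_hub` and `llama-cpp-python` packages are installed; the GGUF filename used here is an assumption, so check the repository's file listing for the actual name.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the GGUF weights from the Hugging Face repository.
# NOTE: the filename below is an assumption; verify it against the repo's file list.
model_path = hf_hub_download(
    repo_id="inflaton/Qwen2-1.5B-gguf-f16",
    filename="qwen2-1.5b-f16.gguf",
)

# Load the model; the F16 weights need roughly 3.1 GB of RAM/VRAM.
llm = Llama(model_path=model_path, n_ctx=2048)

# Run a simple completion to confirm the model loads and responds.
result = llm("Q: What is the GGUF file format? A:", max_tokens=64)
print(result["choices"][0]["text"])
```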
Best Alternatives | Context / RAM | Downloads | Likes |
---|---|---|---|
Qwen2 1.5B Instruct GGUF | 0K / 0.4 GB | 488704 | 9 |
Qwen2 1.5B Ita V2 | 0K / 0.1 GB | 24 | 0 |
Jenna V3 Qwen2 1.5 GGUF | 0K / 3.1 GB | 23 | 1 |
Jenna V3 Qwen2 1.5 GGUF Q4 | 0K / 1 GB | 18 | 0 |
Qwen2 1.5B MAC Gguf Q8 0 | 0K / 0.4 GB | 23 | 0 |
Qwen2 1.5B | 0K / 1.1 GB | 9 | 0 |
Qwen2 1.5B MAC Gguf Q4 K M | 0K / 0.3 GB | 22 | 0 |
Qwen2 1.5B MAC Gguf F16 | 0K / 3.1 GB | 17 | 0 |
Qwen2 1.5B Gguf Q4 K M | 0K / 0.7 GB | 12 | 0 |
Qwen2 1.5B MAC Gguf Q5 K M | 0K / 0.3 GB | 5 | 0 |