LLM Name | Yi 9B GGUF |
---|---|
Repository 🤗 | https://huggingface.co/TouchNight/Yi-9B-GGUF |
Model Size | 9B |
Required VRAM | 3.4 GB |
Updated | 2025-03-15 |
Maintainer | TouchNight |
Model Type | llama |
Model Files | |
GGUF Quantization | Yes |
Quantization Types | gguf, q2, q4_k, q5_k |
Model Architecture | AutoModel |
License | apache-2.0 |
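Since the listing above gives only the repository URL and the quantization tags (the per-file list is empty), below is a minimal sketch of loading one of the GGUF quants locally with llama-cpp-python. The filename glob is an assumption; check the repository's file list for the exact quant filename, and adjust `n_ctx`/`n_gpu_layers` to fit the roughly 3.4 GB VRAM requirement noted above.

```python
from llama_cpp import Llama

# Sketch only: download a quantized file from the repo and run local inference.
# The "*Q4_K_M.gguf" glob is an assumed filename pattern, not taken from the listing.
llm = Llama.from_pretrained(
    repo_id="TouchNight/Yi-9B-GGUF",
    filename="*Q4_K_M.gguf",  # pick the quant that matches your memory budget (q2/q4_k/q5_k)
    n_ctx=4096,               # context window; raise or lower to fit available RAM/VRAM
    n_gpu_layers=-1,          # offload all layers to GPU if one is available, else use 0
)

out = llm("Q: What is the capital of France? A:", max_tokens=32, stop=["\n"])
print(out["choices"][0]["text"])
```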
Best Alternatives | Context / RAM | Downloads | Likes |
---|---|---|---|
...e Dolphin 2.9.1 Yi 1.5 9b GGUF | 0K / 5.3 GB | 22 | 0 |
Yi 1.5 9B Chat GGUF | 0K / 3.4 GB | 73 | 5 |
...n Dolphin 2.9.1 Yi 1.5 9b GGUF | 0K / 5.3 GB | 41 | 0 |
...s Dolphin 2.9.1 Yi 1.5 9b GGUF | 0K / 5.3 GB | 16 | 0 |
Yi 9B 200K GGUF | 0K / 3.4 GB | 358 | 6 |
Yi 9B GGUF | 0K / 3.4 GB | 336 | 14 |
Yi Super 9B GGUF | 0K / 3.4 GB | 216 | 4 |
NeuralQuant 9B GGUF | 0K / 3.8 GB | 337 | 1 |
NeuralPipe 9B Merged GGUF | 0K / 3.8 GB | 92 | 2 |
...a 2 9B Chinese Chat Uncensored | 0K / 18.6 GB | 13549 | 29 |