LLM Name | Yi 9B GGUF
---|---
Repository 🤗 | https://huggingface.co/TouchNight/Yi-9B-GGUF
Model Size | 9b
Required VRAM | 3.4 GB
Updated | 2025-02-22
Maintainer | TouchNight
Model Type | llama
Model Files | 
GGUF Quantization | Yes
Quantization Type | gguf, q2, q4_k, q5_k
Model Architecture | AutoModel
License | apache-2.0
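As a quick orientation, below is a minimal sketch of downloading one of the GGUF quantizations from this repository and running it locally with llama-cpp-python. The `.gguf` filename used here is an assumption; check the repository's file listing for the actual q2 / q4_k / q5_k variant you want.

```python
# Minimal sketch (assumptions noted below): fetch a GGUF file from the
# TouchNight/Yi-9B-GGUF repo and run a short completion with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Hypothetical filename -- the repo's Files tab lists the real q2/q4_k/q5_k files.
model_path = hf_hub_download(
    repo_id="TouchNight/Yi-9B-GGUF",
    filename="Yi-9B.Q4_K_M.gguf",
)

# Memory use depends on the chosen quantization (roughly 3-6 GB for this model).
llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```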
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
Yi 1.5 9B Chat GGUF | 0K / 3.4 GB | 89 | 5 |
...e Dolphin 2.9.1 Yi 1.5 9b GGUF | 0K / 5.3 GB | 22 | 0 |
...n Dolphin 2.9.1 Yi 1.5 9b GGUF | 0K / 5.3 GB | 41 | 0 |
...s Dolphin 2.9.1 Yi 1.5 9b GGUF | 0K / 5.3 GB | 16 | 0 |
Yi 9B GGUF | 0K / 3.4 GB | 327 | 14 |
Yi Super 9B GGUF | 0K / 3.4 GB | 243 | 4 |
Yi 9B 200K GGUF | 0K / 3.4 GB | 170 | 6 |
NeuralQuant 9B GGUF | 0K / 3.8 GB | 83 | 1 |
NeuralPipe 9B Merged GGUF | 0K / 3.8 GB | 55 | 2 |
...a 2 9B Chinese Chat Uncensored | 0K / 18.6 GB | 21371 | 29 |