| Best Alternatives | HF Rank | Context/RAM | Downloads | Likes |
|---|---|---|---|---|
| SUS Chat 34B GGUF | 74.6 | 0K / 14.6 GB | 340 | 16 |
| Yi 34B GGUF | 69.7 | 0K / 14.6 GB | 547 | 73 |
| Mergekit Slerp Fecxcxs GGUF | — | 0K / 8.4 GB | 141 | 0 |
| ...gekit Passthrough Zpfenfn GGUF | — | 0K / 8.7 GB | 155 | 0 |
| ...4B 200K DARE Megamerge V8 GGUF | — | 0K / 9.3 GB | 2484 | 11 |
| CodeLlama 34B Hf GGUF | — | 0K / 12.5 GB | 177 | 3 |
| StructLM 34B GGUF | — | 0K / 12.5 GB | 263 | 2 |
| CodeLlama 34B Python Hf GGUF | — | 0K / 12.5 GB | 197 | 1 |
| CodeLlama 34B Instruct Hf GGUF | — | 0K / 12.5 GB | 197 | 1 |
| ...y 34B 200K Chat Evaluator GGUF | — | 0K / 12.8 GB | 340 | 11 |
| LLM Name | Yi 34B V3 GGUF |
|---|---|
| Repository | Open on 🤗 |
| Model Name | Yi 34B v3 |
| Model Creator | MindsAndCompany |
| Base Model(s) | |
| Model Size | 34b |
| Required VRAM | 14.6 GB |
| Updated | 2024-07-04 |
| Maintainer | TheBloke |
| Model Type | yi |
| Model Files | |
| GGUF Quantization | Yes |
| Quantization Type | gguf |
| Model Architecture | AutoModel |
| License | other |
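The card lists a 14.6 GB GGUF file as the required VRAM, but actual memory use also depends on the KV cache, which grows with context length. The sketch below is a rough back-of-envelope check, not an official formula; the helper names (`kv_cache_gb`, `fits_in_memory`), the safety margin, and the architecture defaults (60 layers, 8 KV heads, head dim 128, typical for Yi-34B-class models) are illustrative assumptions.

```python
# Rough memory-fit check for a quantized GGUF model (a sketch, not an
# official formula): total need ≈ model file size + KV cache + small margin.
# Helper names and defaults below are illustrative assumptions.

def kv_cache_gb(n_ctx: int, n_layers: int = 60, n_kv_heads: int = 8,
                head_dim: int = 128, bytes_per_elem: int = 2) -> float:
    """FP16 KV cache: 2 (K and V) x layers x kv_heads x head_dim x ctx bytes."""
    return 2 * n_layers * n_kv_heads * head_dim * n_ctx * bytes_per_elem / 1e9

def fits_in_memory(file_size_gb: float, ram_gb: float, n_ctx: int = 4096,
                   margin_gb: float = 1.0) -> bool:
    """True if the model file plus KV cache plus a safety margin fits in RAM."""
    return file_size_gb + kv_cache_gb(n_ctx) + margin_gb <= ram_gb

# The Yi 34B V3 GGUF file above is listed at 14.6 GB:
print(fits_in_memory(14.6, 16.0))  # → False (too tight on 16 GB)
print(fits_in_memory(14.6, 24.0))  # → True  (comfortable on 24 GB)
```

In practice, a lower-bit quantization (a smaller file in the alternatives table) is the usual way to fit a 34B model into less memory.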