Best Alternatives | HF Rank | Context/RAM | Downloads | Likes |
---|---|---|---|---|
SUS Chat 34B GGUF | 74.6 | 0K / 14.6 GB | 267 | 16 |
Yi 34B GGUF | 69.7 | 0K / 14.6 GB | 567 | 73 |
Mergekit Slerp Fecxcxs GGUF | — | 0K / 8.4 GB | 153 | 0 |
...4B 200K DARE Megamerge V8 GGUF | — | 0K / 9.3 GB | 2045 | 11 |
CodeLlama 34B Hf GGUF | — | 0K / 12.5 GB | 174 | 3 |
StructLM 34B GGUF | — | 0K / 12.5 GB | 240 | 2 |
CodeLlama 34B Instruct Hf GGUF | — | 0K / 12.5 GB | 213 | 1 |
CodeLlama 34B Python Hf GGUF | — | 0K / 12.5 GB | 188 | 1 |
...y 34B 200K Chat Evaluator GGUF | — | 0K / 12.8 GB | 150 | 11 |
Deepmoney 34B 200K Base GGUF | — | 0K / 12.8 GB | 108 | 10 |
LLM Name | Mergekit Passthrough Zpfenfn GGUF |
---|---|
Repository | Open on 🤗 |
Model Name | mergekit-passthrough-zpfenfn-GGUF |
Model Creator | mergekit-community |
Base Model(s) | |
Model Size | 34b |
Required VRAM | 8.7 GB |
Updated | 2024-07-06 |
Maintainer | MaziyarPanahi |
Model Type | mistral |
Model Files | |
GGUF Quantization | Yes |
Quantization Type | gguf |
Model Architecture | AutoModel |