LLM Name | WhiteRabbitNeo 33B V1 GGUF |
Repository 🤗 | https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-GGUF |
Model Name | WhiteRabbitNeo 33B v1 |
Model Creator | WhiteRabbitNeo |
Base Model(s) | |
Model Size | 33b |
Required VRAM | 12.3 GB |
Updated | 2024-09-16 |
Maintainer | TheBloke |
Model Type | deepseek |
Model Files | |
GGUF Quantization | Yes |
Quantization Type | gguf |
Model Architecture | AutoModel |
License | other |
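
Since the repository ships GGUF quantizations, one quick way to try the model locally is through llama-cpp-python together with huggingface_hub. The snippet below is a minimal sketch, not an official recipe: the quant filename, context size, and generation settings are assumptions, so check the repository's file list for the exact GGUF names before running it.

```python
# Minimal sketch: download one GGUF quant and run a prompt with llama-cpp-python.
# Assumptions: llama-cpp-python and huggingface_hub are installed, and the repo
# contains a file named "whiterabbitneo-33b-v1.Q4_K_M.gguf" (verify the actual
# filenames in the repository's "Files" tab).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch a single quantized file from the Hugging Face repo.
model_path = hf_hub_download(
    repo_id="TheBloke/WhiteRabbitNeo-33B-v1-GGUF",
    filename="whiterabbitneo-33b-v1.Q4_K_M.gguf",  # assumed filename
)

# Load the GGUF model. n_gpu_layers=-1 offloads all layers to the GPU if VRAM
# allows; lower it (or use 0 for CPU-only) on smaller cards.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

# Run a simple completion and print the generated text.
output = llm(
    "Write a Python function that checks whether a TCP port is open.",
    max_tokens=256,
)
print(output["choices"][0]["text"])
```
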
Best Alternatives | Context / RAM | Downloads | Likes |
---|---|---|---|
...epseek Coder 33B Instruct GGUF | 0K / 14 GB | 4651 | 158 |
WizardCoder 33B V1.1 GGUF | 0K / 14 GB | 1048 | 44 |
Everyone Coder 33B Base GGUF | 0K / 12.4 GB | 438 | 12 |
Deepseek Coder 33B Base GGUF | 0K / 14 GB | 1624 | 8 |
Code 33B GGUF | 0K / 13.5 GB | 517 | 5 |
Python Code 33B GGUF | 0K / 13.5 GB | 858 | 3 |
Vicuna 33B Coder GGUF | 0K / 13.5 GB | 758 | 6 |
Vicuna 33B GGUF | 0K / 13.5 GB | 685 | 16 |
Chronoboros 33B GGUF | 0K / 13.5 GB | 208 | 8 |
Chronos 33B GGUF | 0K / 13.5 GB | 250 | 0 |
Have you tried this model? Rate its performance. This feedback helps the ML community identify the most suitable models for their needs. Your contribution really does make a difference!