LLM Name | HyperAutoGGUF Q4 |
Repository 🤗 | https://huggingface.co/HyperdustProtocol/HyperAutoGGUF-q4
Base Model(s) | |
Model Size | 7B
Required VRAM | 4.1 GB |
Updated | 2024-12-21 |
Maintainer | HyperdustProtocol |
Model Type | llama |
Model Files | |
Supported Languages | en |
GGUF Quantization | Yes |
Quantization Type | gguf, q4, 4bit
Model Architecture | AutoModel |
License | apache-2.0 |
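
Since the listing describes a single 4-bit GGUF checkpoint, one quick way to try it locally is with llama-cpp-python, which can pull a GGUF file straight from the Hugging Face repository. The sketch below is an assumption, not part of the listing: the exact `.gguf` filename inside the repo is guessed with a glob pattern, and the context size and sampling settings are placeholders to adjust for your hardware.

```python
# Minimal sketch, assuming llama-cpp-python (with huggingface_hub installed)
# and that the repo contains one q4 GGUF file matching the glob below.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="HyperdustProtocol/HyperAutoGGUF-q4",  # repository from the listing above
    filename="*q4*.gguf",       # assumed filename pattern; check the repo's file list
    n_ctx=2048,                 # context window; placeholder value
    n_gpu_layers=-1,            # offload all layers if ~4.1 GB VRAM is available, else set 0
)

out = llm("Q: What does Q4 quantization mean for a GGUF model? A:", max_tokens=64)
print(out["choices"][0]["text"])
```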
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
Pixel | 8K / 4.4 GB | 19 | 0 |
Mistral 7B Instruct V0.3 GGUF | 0K / 1.6 GB | 2031016 | 68 |
Qwen2 7B Instruct GGUF | 0K / 1.9 GB | 1968431 | 10 |
QwQ LCoT 7B Instruct GGUF | 0K / 4.7 GB | 326 | 7 |
Qwen UMLS 7B Instruct GGUF | 0K / 4.7 GB | 991 | 8 |
WizardLM 2 7B GGUF | 0K / 2.7 GB | 1993468 | 74 |
Conversely Mistral 7B | 0K / 0.2 GB | 294 | 0 |
Neumind Math 7B Instruct GGUF | 0K / 4.7 GB | 154 | 7 |
Mistral 7B Instruct V0.2 GGUF | 0K / 3.1 GB | 131566 | 407 |
Mistral 7B Instruct V0.3 GGUF | 0K / 2.7 GB | 61551 | 8 |
Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference!