LLM Name | Deepseekq |
Repository 🤗 | https://huggingface.co/Daemontatox/Deepseekq |
Base Model(s) | |
Model Size | 7B |
Required VRAM | 4.2 GB |
Updated | 2024-08-11 |
Maintainer | Ammartatox |
Model Type | llama |
Model Files | |
Supported Languages | en |
GGUF Quantization | Yes |
Quantization Type | gguf |
Model Architecture | AutoModel |
License | apache-2.0 |
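Since the card lists GGUF quantization with a llama-type model, a minimal sketch of downloading and running it with `huggingface_hub` and `llama-cpp-python` may help. The exact `.gguf` filename inside the repository and the context length are assumptions; check the repo's file list on Hugging Face before running.

```python
# Minimal sketch: fetch the GGUF file and run it locally with llama-cpp-python.
# Filename and n_ctx below are assumptions, not confirmed by the model card.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="Daemontatox/Deepseekq",
    filename="deepseekq.Q4_K_M.gguf",  # hypothetical filename; verify in the repo
)

llm = Llama(model_path=model_path, n_ctx=4096)  # context length is an assumption
out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```

At roughly 4.2 GB of required VRAM, a quant at this size should fit comfortably on an 8 GB consumer GPU, or run CPU-only at reduced speed.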
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---|
Pixel | 8K / 4.4 GB | 69 | 0 |
Mistral 7B Instruct V0.3 GGUF | 0K / 1.6 GB | 100292 | 58 |
Qwen2 7B Instruct GGUF | 0K / 1.9 GB | 189850 | 7 |
WizardLM 2 7B GGUF | 0K / 2.7 GB | 95894 | 73 |
Mistral 7B Instruct V0.3 GGUF | 0K / 2.7 GB | 72963 | 5 |
Mistral 7B Instruct V0.2 GGUF | 0K / 3.1 GB | 95769 | 394 |
Qwen2 7B Instruct V0.6 GGUF | 0K / 4.5 GB | 13522 | 0 |
Qwen2 7B Instruct V0.1 GGUF | 0K / 4.5 GB | 9714 | 0 |
Qwen2 7B Instruct V0.7 GGUF | 0K / 4.5 GB | 9530 | 0 |
Qwen2 7B Instruct V0.3 GGUF | 0K / 4.5 GB | 8911 | 1 |