| Property | Value |
|---|---|
| LLM Name | Mistral 7B Bnb 4bit Q4 K M |
| Repository 🤗 | https://huggingface.co/AlekseyElygin/mistral-7b-bnb-4bit-q4_k_m |
| Base Model(s) | |
| Model Size | 7B |
| Required VRAM | 4.4 GB |
| Updated | 2025-01-15 |
| Maintainer | AlekseyElygin |
| Model Type | mistral |
| Model Files | |
| Supported Languages | en |
| GGUF Quantization | Yes |
| Quantization Type | gguf, q4, 4bit, q4_k |
| Model Architecture | AutoModel |
| License | apache-2.0 |
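Since the repository ships GGUF-quantized weights (q4_k_m), one straightforward way to try the model locally is via `llama-cpp-python`. Below is a minimal sketch assuming `llama-cpp-python` and `huggingface_hub` are installed; the exact `.gguf` filename inside the repository is an assumption here, so check the repo's file list and adjust it before running. Per the table above, expect roughly 4.4 GB of RAM/VRAM usage.

```python
# Minimal sketch: download the GGUF file and run a completion with llama-cpp-python.
# The filename below is hypothetical -- verify it against the repository's file list.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="AlekseyElygin/mistral-7b-bnb-4bit-q4_k_m",
    filename="mistral-7b-bnb-4bit-q4_k_m.gguf",  # assumed filename
)

# n_ctx sets the context window; raise it only if you have the memory headroom.
llm = Llama(model_path=model_path, n_ctx=4096)

out = llm("Q: Name the capital of France. A:", max_tokens=32)
print(out["choices"][0]["text"])
```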
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Pixel | 8K / 4.4 GB | 21 | 0 |
| Mistral 7B Instruct V0.3 GGUF | 0K / 1.6 GB | 2349530 | 73 |
| Qwen2 7B Instruct GGUF | 0K / 1.9 GB | 2304358 | 10 |
| WizardLM 2 7B GGUF | 0K / 2.7 GB | 2305266 | 74 |
| Deepthink Reasoning 7B GGUF | 0K / 4.7 GB | 1361 | 9 |
| QwQ LCoT 7B Instruct GGUF | 0K / 4.7 GB | 966 | 7 |
| Qwen UMLS 7B Instruct GGUF | 0K / 4.7 GB | 836 | 8 |
| Conversely Mistral 7B | 0K / 0.2 GB | 39 | 0 |
| Mistral 7B Instruct V0.3 GGUF | 0K / 2.7 GB | 57078 | 8 |
| Mistral 7B Instruct V0.2 GGUF | 0K / 3.1 GB | 94517 | 411 |