| Property | Value |
|---|---|
| LLM Name | Mistral 7B Instruct V0.2 Python 18K |
| Repository 🤗 | https://huggingface.co/theprint/Mistral-7b-Instruct-v0.2-python-18k |
| Base Model(s) | |
| Model Size | 7b |
| Required VRAM | 14.4 GB |
| Updated | 2024-10-18 |
| Maintainer | theprint |
| Instruction-Based | Yes |
| Model Files | |
| Supported Languages | en |
| GGUF Quantization | Yes |
| Quantization Type | 4bit \| gguf |
| Model Architecture | AutoModel |
| License | apache-2.0 |
| Model Max Length | 131072 |
| Is Biased | none |
| Tokenizer Class | LlamaTokenizer |
| Padding Token | `<unk>` |
| PEFT Type | LORA |
| LoRA Model | Yes |
| PEFT Target Modules | k_proj \| gate_proj \| o_proj \| up_proj \| q_proj \| down_proj \| v_proj |
| LoRA Alpha | 32 |
| LoRA Dropout | 0 |
| R Param | 16 |
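The PEFT rows above (type, target modules, alpha, dropout, r) fully describe the adapter configuration. Below is a minimal sketch of recreating that setup with the `peft` library; note that the Base Model(s) field on this card is empty, so `mistralai/Mistral-7B-Instruct-v0.2` is an assumption inferred from the model name.

```python
# Sketch: rebuild the LoRA configuration listed in the table with `peft`.
# BASE_MODEL is an assumption (the Base Model(s) field is empty on this card).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

BASE_MODEL = "mistralai/Mistral-7B-Instruct-v0.2"  # assumed base checkpoint

lora_config = LoraConfig(
    r=16,               # R Param
    lora_alpha=32,      # LoRA Alpha
    lora_dropout=0.0,   # LoRA Dropout
    target_modules=[    # PEFT Target Modules from the table
        "k_proj", "gate_proj", "o_proj", "up_proj",
        "q_proj", "down_proj", "v_proj",
    ],
    task_type="CAUSAL_LM",
)

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)  # LlamaTokenizer per the card
tokenizer.pad_token = tokenizer.unk_token              # Padding Token is <unk>

model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```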
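Since the card lists GGUF quantization at 4-bit, a quantized file can be run locally in far less memory than the 14.4 GB of VRAM the full-precision weights need. A sketch with `llama-cpp-python` follows; the `.gguf` filename glob is an assumption based on common quant naming, so verify it against the repository's file list.

```python
# Sketch: run a 4-bit GGUF quantization of this model with llama-cpp-python.
# The filename glob is an assumed quant name; check the repo's files first.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="theprint/Mistral-7b-Instruct-v0.2-python-18k",
    filename="*Q4_K_M.gguf",  # assumption: typical 4-bit quant filename
    n_ctx=8192,               # card lists a 131072 max length; 8192 keeps RAM modest
)

# Mistral-style instruction template, since the card marks the model
# as instruction-based.
out = llm(
    "[INST] Write a Python function that reverses a string. [/INST]",
    max_tokens=256,
)
print(out["choices"][0]["text"])
```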
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Pixel | 8K / 4.4 GB | 63 | 0 |
| Mistral 7B Instruct V0.2 GGUF | 0K / 3.1 GB | 95301 | 389 |
| Mistral 7B Instruct V0.3 GGUF | 0K / 2.7 GB | 75620 | 3 |
| Mistral 7B Instruct V0.3 GGUF | 0K / 1.6 GB | 7892 | 58 |
| Qwen2 7B Instruct V0.6 GGUF | 0K / 4.5 GB | 13522 | 0 |
| Qwen2 7B Instruct V0.1 GGUF | 0K / 4.5 GB | 9714 | 0 |
| Qwen2 7B Instruct V0.7 GGUF | 0K / 4.5 GB | 9530 | 0 |
| Qwen2 7B Instruct V0.3 GGUF | 0K / 4.5 GB | 8911 | 1 |
| Qwen2 7B Instruct V0.2 GGUF | 0K / 4.5 GB | 7986 | 0 |
| Qwen2 7B Instruct V0.8 GGUF | 0K / 4.5 GB | 6155 | 1 |