| Field | Value |
|---|---|
| Model Type | |
| Additional Notes | |
| LLM Name | Mistral Numericnlg FV |
| Repository 🤗 | https://huggingface.co/moetezsa/mistral_numericnlg_FV |
| Base Model(s) | |
| Model Size | 7b |
| Required VRAM | 0.3 GB |
| Updated | 2025-01-27 |
| Maintainer | moetezsa |
| Instruction-Based | Yes |
| Model Files | |
| Quantization Type | 4bit |
| Model Architecture | Adapter |
| License | apache-2.0 |
| Model Max Length | 32768 |
| Is Biased | none |
| Tokenizer Class | LlamaTokenizer |
| Padding Token | `<unk>` |
| PEFT Type | LORA |
| LoRA Model | Yes |
| PEFT Target Modules | o_proj, k_proj, v_proj, q_proj, gate_proj, down_proj, up_proj |
| LoRA Alpha | 16 |
| LoRA Dropout | 0 |
| R Param | 32 |
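Because the card lists the architecture as a 4-bit LoRA adapter rather than a standalone checkpoint, the adapter has to be attached to its base model at load time. The sketch below is a minimal, hedged example: it assumes the base is `mistralai/Mistral-7B-v0.1` (the Base Model(s) field above is blank) and that 4-bit loading goes through `bitsandbytes`.

```python
# Minimal loading sketch. Assumptions: base model is mistralai/Mistral-7B-v0.1
# (not stated on this card) and 4-bit quantization via bitsandbytes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

BASE_ID = "mistralai/Mistral-7B-v0.1"          # assumed base model
ADAPTER_ID = "moetezsa/mistral_numericnlg_FV"  # repository listed above

# 4-bit quantization keeps the base model's footprint small; the ~0.3 GB
# of VRAM listed on this card likely reflects the adapter weights alone.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    BASE_ID, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, ADAPTER_ID)

# The card lists LlamaTokenizer with <unk> as the padding token.
tokenizer = AutoTokenizer.from_pretrained(ADAPTER_ID)
tokenizer.pad_token = tokenizer.unk_token
```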
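For reference, the PEFT fields above map onto a `peft.LoraConfig` roughly as sketched below. This is only relevant if you want to reproduce a similar fine-tune, since the published adapter already embeds these settings; the `task_type` (causal LM) is an assumption not stated on the card.

```python
from peft import LoraConfig

# Hedged reconstruction of the card's LoRA hyperparameters.
lora_config = LoraConfig(
    r=32,             # "R Param"
    lora_alpha=16,    # "LoRA Alpha"
    lora_dropout=0.0, # "LoRA Dropout"
    bias="none",      # "Is Biased"
    task_type="CAUSAL_LM",  # assumed; not listed on the card
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
)
```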
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Mistral Wikitable FV | 0K / 0.3 GB | 5 | 0 |
| Mistral Charttotext FV | 0K / 0.3 GB | 5 | 0 |
| Zephyr 7B Beta Agent Instruct | 0K / 0.3 GB | 11 | 1 |
| Falcon 7B Instruct 4bit Lora | 0K / 0 GB | 0 | 1 |
| Lemonilia ShoriRP V0.75d | 0K / 0.2 GB | 4 | 1 |
| Qwen Megumin | 0K / 0.1 GB | 13 | 0 |
| Deepthink Reasoning Adapter | 0K / 0.2 GB | 39 | 8 |
| Mistral Finetuned Bookcorpus | 0K / 0 GB | 50 | 0 |
| Mistral 7B Instruct Sa V0.1 | 0K / 0 GB | 6 | 0 |
| Mistral 7B V0.1 Emotion | 0K / 1.3 GB | 170 | 1 |