| Best Alternatives | HF Rank | Context/RAM | Downloads | Likes |
|---|---|---|---|---|
| Mistral Finetuned DialogSumm | 47.9 | 0K / 0 GB | 7 | 1 |
| ...ral 7b WizardLMEvolInstruct70k | — | 0K / 0 GB | 1 | 11 |
| Thai Buffala Lora 7B V0.1 | — | 0K / 0 GB | 7 | 10 |
| Falcon 7B QLoRA Alpaca Arabic | — | 0K / 0 GB | 14 | 7 |
| Mistral Finetuned Samsum | — | 0K / 0 GB | 0 | 4 |
| Falcon7b Fine Tuned Therapy | — | 0K / 0 GB | 0 | 2 |
| ...pha RakutenAI 7B Instruct Lora | — | 0K / 0 GB | 0 | 2 |
| Mistraldog | — | 0K / 0 GB | 23 | 1 |
| OpenHathi 7B FT V0.1 SI | — | 0K / 0 GB | 6 | 1 |
| ...uct Gpt4all Max Length 3072 V1 | — | 0K / 0 GB | 1 | 1 |
| LLM Name | Mistral 7B Text To Sql Without Flash Attention 2 |
|---|---|
| Repository | Open on 🤗 |
| Base Model(s) | |
| Model Size | 7b |
| Required VRAM | 1.9 GB |
| Updated | 2024-07-05 |
| Maintainer | frankmorales2020 |
| Instruction-Based | Yes |
| Model Files | |
| Model Architecture | Adapter |
| License | apache-2.0 |
| Is Biased | none |
| Tokenizer Class | LlamaTokenizer |
| Padding Token | <\|im_end\|> |
| PEFT Type | LORA |
| LoRA Model | Yes |
| PEFT Target Modules | down_proj, v_proj, gate_proj, up_proj, k_proj, q_proj, o_proj |
| LoRA Alpha | 128 |
| LoRA Dropout | 0.05 |
| R Param | 256 |
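
Because the architecture is listed as Adapter (a LoRA, not a full checkpoint), the weights have to be applied on top of a Mistral 7B base model at load time. Below is a minimal sketch of doing that with Hugging Face `transformers` and `peft`, plus a `LoraConfig` that mirrors the hyperparameters in the table. Both repo ids are assumptions: the card leaves Base Model(s) blank and the exact Hub path isn't shown here, so substitute the ids from the repository page.

```python
# Minimal sketch: loading a LoRA adapter onto a Mistral 7B base with peft.
# ASSUMPTION: both repo ids below are illustrative guesses -- the card leaves
# "Base Model(s)" blank and does not show the exact Hub path for this adapter.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, PeftModel

BASE_ID = "mistralai/Mistral-7B-Instruct-v0.1"  # assumed base model
ADAPTER_ID = "frankmorales2020/Mistral-7B-Text-To-Sql-Without-Flash-Attention-2"  # assumed path

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
tokenizer.pad_token = "<|im_end|>"  # padding token listed on the card

base = AutoModelForCausalLM.from_pretrained(BASE_ID, device_map="auto")
# Only the adapter weights (~1.9 GB per the card) are downloaded here;
# peft reads the stored adapter_config.json automatically.
model = PeftModel.from_pretrained(base, ADAPTER_ID)

# The table's hyperparameters correspond to a LoraConfig like this one,
# which would only be needed to retrain a comparable adapter from scratch:
lora_config = LoraConfig(
    r=256,                  # R Param
    lora_alpha=128,         # LoRA Alpha
    lora_dropout=0.05,      # LoRA Dropout
    bias="none",            # Is Biased: none
    target_modules=[        # PEFT Target Modules
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    task_type="CAUSAL_LM",  # assumption: standard for instruction-tuned decoders
)
```

For deployment without the `peft` dependency, `model.merge_and_unload()` folds the adapter into the base weights and returns a plain `transformers` model.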