| LLM Name | Cap Iaa Lora |
|---|---|
| Repository 🤗 | https://huggingface.co/kkeezz/cap-iaa-lora |
| Required VRAM | 0.2 GB |
| Updated | 2024-11-15 |
| Maintainer | kkeezz |
| Model Files |  |
| Model Architecture | Adapter |
| Is Biased | none |
| PEFT Type | LORA |
| LoRA Model | Yes |
| PEFT Target Modules | self_attn.q_proj and self_attn.v_proj.multiway.1 for model.layers.0 through model.layers.31 |
| LoRA Alpha | 256 |
| LoRA Dropout | 0.05 |
| R Param | 128 |
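The adapter targets the query projections and multiway value projections of all 32 decoder layers, with rank r = 128 and alpha = 256, i.e. a LoRA scaling factor of alpha/r = 2. Below is a minimal loading sketch using the Hugging Face `peft` library. It assumes the base model name is recorded in the adapter's config (`base_model_name_or_path`) and that the base loads via `AutoModelForCausalLM`; the `multiway` module names suggest a multimodal base, so verify the correct model class before relying on this.

```python
# Minimal sketch: attaching the cap-iaa-lora adapter with Hugging Face PEFT.
# Assumption: the base model is resolvable from the adapter config and loads
# as a causal LM -- check the repository for the actual base architecture.
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_id = "kkeezz/cap-iaa-lora"

# Read the adapter config to discover the base model it was trained on.
config = PeftConfig.from_pretrained(adapter_id)
base = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Attach the ~0.2 GB LoRA weights (r=128, alpha=256, dropout=0.05) on top
# of the frozen base model.
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()
```

Since only the low-rank q_proj/v_proj deltas are stored, the adapter itself stays small (the 0.2 GB figure above); the base model's VRAM requirement comes on top of that.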
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Phi 3 Mini 4K Instruct Sa V0.1 | 0K / 0 GB | 8 | 0 |
| Samantha Omni Humanlike Lora | 0K / 0 GB | 56 | 3 |
| ...is Violet Toxic GRPO V0.4 Lora | 0K / 0.5 GB | 12 | 0 |
| Reflection Model | 0K / 0.2 GB | 0 | 1 |
| SpectraMind | 0K / 16.1 GB | 120 | 3 |
| ...mall Physics Finetuned Adapter | 0K / 0.1 GB | 8 | 1 |
| SpectraMindQ | 0K / 0.2 GB | 8 | 1 |
| L3.1 Spark R64 LoRA | 0K / 0.4 GB | 8 | 0 |
| Mistral Small Fujin Qlora | 0K / 0.8 GB | 47 | 2 |
| Mistral Small Dampf Qlora | 0K / 0.8 GB | 19 | 0 |