| LLM Name | Mistral Small Physics Finetuned Adapter |
|---|---|
| Repository 🤗 | https://huggingface.co/benhaotang/mistral-small-physics-finetuned-adapter |
| Base Model(s) | |
| Required VRAM | 0.1 GB |
| Updated | 2025-02-22 |
| Maintainer | benhaotang |
| Instruction-Based | Yes |
| Model Files | |
| Model Architecture | Adapter |
| Is Biased | none |
| PEFT Type | LoRA |
| LoRA Model | Yes |
| PEFT Target Modules | k_proj, o_proj, q_proj, v_proj |
| LoRA Alpha | 16 |
| LoRA Dropout | 0.05 |
| R Param | 8 |
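The table above fully specifies the PEFT setup, so as a rough sketch, loading this adapter with the `peft` library might look like the following. Note that the Base Model(s) field in this card is blank, so `BASE_MODEL` below is a placeholder you must fill in with the Mistral Small checkpoint the adapter was trained against; the `LoraConfig` shown simply mirrors the hyperparameters listed in the table for reference, not a verified training recipe.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, PeftModel

# Placeholder: the base checkpoint is not listed in this card.
BASE_MODEL = "..."
ADAPTER = "benhaotang/mistral-small-physics-finetuned-adapter"

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, torch_dtype="auto", device_map="auto"
)

# Attach the ~0.1 GB adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base, ADAPTER)

# For reference only: a LoraConfig mirroring the card's values
# (r=8, alpha=16, dropout=0.05, attention projections as targets).
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["k_proj", "o_proj", "q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
```

If you want a single standalone checkpoint rather than a base-plus-adapter pair, `model.merge_and_unload()` folds the LoRA weights into the base model after loading.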
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Phi 3 Mini 4K Instruct Sa V0.1 | 0K / 0 GB | 8 | 0 |
| Mistral Small Fujin Qlora | 0K / 0.8 GB | 47 | 2 |
| Mistral Small Dampf Qlora | 0K / 0.8 GB | 19 | 0 |
| Test | 0K / 0 GB | 5 | 0 |
| Vfgf | 0K / 0 GB | 6 | 0 |
| Results | 0K / 0 GB | 6 | 0 |
| Results Phi3 Medium 4k | 0K / 0.1 GB | 5 | 0 |
| Results | 0K / 0.1 GB | 6 | 0 |
| Phi3AdapterModel | 0K / 0.1 GB | 12 | 0 |
| Phi 3 Mini QLoRA | 0K / 0 GB | 189 | 0 |