| LLM Name | Smarts Llama3 |
|---|---|
| Repository 🤗 | https://huggingface.co/ResplendentAI/Smarts_Llama3 |
| Required VRAM | 0.7 GB |
| Updated | 2025-01-12 |
| Maintainer | ResplendentAI |
| Model Files | |
| Model Architecture | Adapter |
| Bias | none |
| PEFT Type | LoRA |
| LoRA Model | Yes |
| PEFT Target Modules | gate_proj, o_proj, up_proj, k_proj, v_proj, down_proj, q_proj |
| LoRA Alpha | 64 |
| LoRA Dropout | 0 |
| R Param | 64 |
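The PEFT hyperparameters above fully specify the adapter, so they can be reproduced as a `peft` `LoraConfig`, and the adapter loaded on top of its base model. A minimal sketch, assuming `meta-llama/Meta-Llama-3-8B-Instruct` as the base checkpoint (the card does not name the base model) and that `transformers` and `peft` are installed:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, PeftModel

BASE = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumption: card omits the base model
ADAPTER = "ResplendentAI/Smarts_Llama3"

# LoraConfig matching the card: r=64, alpha=64, dropout=0, bias="none",
# targeting the seven attention/MLP projections listed above. Shown for
# reference only; PeftModel.from_pretrained reads the adapter's saved config.
lora_config = LoraConfig(
    r=64,
    lora_alpha=64,
    lora_dropout=0.0,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
)

tokenizer = AutoTokenizer.from_pretrained(BASE)
base_model = AutoModelForCausalLM.from_pretrained(
    BASE, torch_dtype=torch.bfloat16, device_map="auto"
)
# Apply the ~0.7 GB adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base_model, ADAPTER)
```

For inference without the PEFT wrapper, `model.merge_and_unload()` folds the LoRA deltas into the base weights.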
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Phi 3 Mini 4K Instruct Sa V0.1 | 0K / 0 GB | 8 | 0 |
| Reflection Model | 0K / 0.2 GB | 0 | 1 |
| SpectraMind | 0K / 16.1 GB | 82 | 3 |
| ...mall Physics Finetuned Adapter | 0K / 0.1 GB | 18 | 1 |
| SpectraMindQ | 0K / 0.2 GB | 13 | 1 |
| L3.1 Spark R64 LoRA | 0K / 0.4 GB | 41 | 0 |
| Mistral Small Fujin Qlora | 0K / 0.8 GB | 45 | 2 |
| Mistral Small Dampf Qlora | 0K / 0.8 GB | 17 | 0 |
| ...stral Small Springdragon Qlora | 0K / 0.8 GB | 5 | 1 |
| Zephyr Phi 1 5 Sft Qlora | 0K / 0 GB | 5 | 0 |