LLM Name | Results |
Repository 🤗 | https://huggingface.co/farshadafx/results |
Base Model(s) | |
Required VRAM | 0 GB |
Updated | 2024-11-07 |
Maintainer | farshadafx |
Instruction-Based | Yes |
Model Files | |
Model Architecture | Adapter |
License | mit |
Model Max Length | 131072 |
LoRA Bias | none |
Tokenizer Class | LlamaTokenizer |
Padding Token | <|endoftext|> |
PEFT Type | LORA |
LoRA Model | Yes |
PEFT Target Modules | gate_proj, v_proj, down_proj, o_proj, k_proj, up_proj, q_proj |
LoRA Alpha | 32 |
LoRA Dropout | 0.05 |
R Param | 16 |
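
The hyperparameters above correspond to a standard PEFT/LoRA adapter configuration. Below is a minimal sketch, not taken from the model card, of how a `LoraConfig` with these values could be written and how the published adapter might be loaded. Since the base model is not listed, `AutoPeftModelForCausalLM` is used to resolve it from the adapter's own config; the `task_type` and tokenizer loading are assumptions about a typical causal-LM fine-tuning setup.

```python
# Minimal sketch, assuming a standard PEFT workflow; not the author's training code.
from peft import LoraConfig, AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# LoRA configuration mirroring the values in the table above.
lora_config = LoraConfig(
    r=16,                      # "R Param"
    lora_alpha=32,             # "LoRA Alpha"
    lora_dropout=0.05,         # "LoRA Dropout"
    target_modules=[
        "gate_proj", "v_proj", "down_proj", "o_proj",
        "k_proj", "up_proj", "q_proj",
    ],
    task_type="CAUSAL_LM",     # assumption: causal language modeling
)

# Loading the published adapter; the base model is read from the adapter
# config stored in the repository (the card does not list it explicitly).
model = AutoPeftModelForCausalLM.from_pretrained("farshadafx/results")
tokenizer = AutoTokenizer.from_pretrained("farshadafx/results")
tokenizer.pad_token = "<|endoftext|>"   # "Padding Token" from the table
```
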
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
Phi 3 Mini 4K Instruct Sa V0.1 | 0K / 0 GB | 9 | 0 |
Mistral Small Fujin Qlora | 0K / 0.8 GB | 15 | 1 |
Test | 0K / 0 GB | 5 | 0 |
Vfgf | 0K / 0 GB | 6 | 0 |
Phi 3 Mini QLoRA | 0K / 0 GB | 180 | 0 |
Results Phi3 Medium 4k | 0K / 0.1 GB | 5 | 0 |
Results | 0K / 0.1 GB | 5 | 0 |
Phi3AdapterModel | 0K / 0.1 GB | 5 | 0 |
Phi 3 Mini 4K Instruct Sft CoT | 0K / 0.1 GB | 5 | 0 |
Phi 3 Mini 4K Instruct Sft | 0K / 0 GB | 6 | 0 |