| Best Alternatives | HF Rank | Context/RAM | Downloads | Likes |
|---|---|---|---|---|
| Mixtral 8x7B Instruct V0.1 | 68.87 | 0K / 0.1 GB | 11 | 9 |
| Mixtral 8x7b MonsterInstruct | 66.34 | 0K / 1 GB | 2 | 1 |
| Alpaca13B Lora | — | 0K / 0 GB | 0 | 33 |
| Dolly Lora | — | 0K / 0 GB | 0 | 25 |
| Aurora | — | 0K / 0 GB | 0 | 20 |
| Gpt4all J Lora | — | 0K / 0 GB | 0 | 18 |
| Alpaca | — | 0K / 0 GB | 0 | 14 |
| WizardLM LlaMA LoRA 13 | — | 0K / 0 GB | 0 | 13 |
| Alpaca7B Lora | — | 0K / 0 GB | 0 | 8 |
| Alpaca Lora German | — | 0K / 0 GB | 0 | 8 |
| LLM Name | Trrapi 16 |
|---|---|
| Repository | Open on 🤗 |
| Base Model(s) | |
| Merged Model | Yes |
| Required VRAM | 0 GB |
| Updated | 2024-06-24 |
| Maintainer | rizla |
| Model Files | |
| Model Architecture | Adapter |
| License | apache-2.0 |
| Is Biased | none |
| PEFT Type | LORA |
| LoRA Model | Yes |
| PEFT Target Modules | w3, q_proj, w1, o_proj, w2, k_proj, v_proj, gate |
| LoRA Alpha | 16 |
| LoRA Dropout | 0.05 |
| R Param | 32 |
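
The PEFT settings above map directly onto a `peft.LoraConfig`. Below is a minimal sketch, assuming the `peft` library is installed; it only mirrors the hyperparameters listed on this card and is not the maintainer's actual training code (the base-model and adapter repository ids are not restated here).

```python
from peft import LoraConfig

# Reconstructs the adapter hyperparameters listed on this card.
lora_config = LoraConfig(
    r=32,                 # R Param
    lora_alpha=16,        # LoRA Alpha
    lora_dropout=0.05,    # LoRA Dropout
    bias="none",          # Is Biased: none
    target_modules=[      # PEFT Target Modules
        "w3", "q_proj", "w1", "o_proj",
        "w2", "k_proj", "v_proj", "gate",
    ],
)
```

Note that with these values the LoRA scaling factor is alpha/r = 16/32 = 0.5, so the adapter's updates are down-weighted relative to the base weights.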