Best Alternatives | HF Rank | Context/RAM | Downloads | Likes
---|---|---|---|---
Mixtral 8x7B Instruct V0.1 | 68.87 | 0K / 0.1 GB | 19 | 9 |
WizardLM LlaMA LoRA 13 | — | 0K / 0 GB | 0 | 13 |
Gigasaiga Lora | — | 0K / 0 GB | 0 | 7 |
Bloomz 7b1 Instruct | — | 0K / 0 GB | 0 | 4 |
...m 6b4 Clp German Instruct Lora | — | 0K / 0 GB | 0 | 2 |
Phi 3 Mini LoRA | — | 0K / 0 GB | 23 | 1 |
Phi 3 Medical Instruct | — | 0K / 0 GB | 2 | 1 |
... Clp German Instruct Lora Peft | — | 0K / 0 GB | 0 | 1 |
GeoV Instruct LoRA | — | 0K / 0 GB | 0 | 1 |
...zardLM LlaMA LoRA 13bbbaaaaddd | — | 0K / 0 GB | 0 | 1 |
LLM Name | Caramelinho |
Repository | Open on 🤗
Required VRAM | 0 GB |
Updated | 2024-07-01 |
Maintainer | Bruno |
Instruction-Based | Yes |
Model Files | |
Supported Languages | pt, en
Model Architecture | Adapter |
Model Max Length | 2048 |
Is Biased | none |
Tokenizer Class | PreTrainedTokenizerFast |
PEFT Type | LORA |
LoRA Model | Yes |
PEFT Target Modules | query_key_value|dense|dense_h_to_4h|dense_4h_to_h |
LoRA Alpha | 16 |
LoRA Dropout | 0.1 |
R Param | 64 |
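The adapter fields above (PEFT type LORA, R = 64, alpha = 16, dropout = 0.1, target modules query_key_value / dense / dense_h_to_4h / dense_4h_to_h) map directly onto a `peft` LoraConfig and onto the usual base-model-plus-adapter loading pattern. The sketch below is a minimal, unverified example: the repo id `Bruno/Caramelinho` and the Falcon-7B base model are assumptions inferred from the maintainer name and the Falcon-style module names, not facts stated on this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, PeftModel

# Hypothetical identifiers -- the card does not state them explicitly.
BASE_ID = "tiiuae/falcon-7b"       # assumed base model (Falcon-style target modules)
ADAPTER_ID = "Bruno/Caramelinho"   # assumed Hugging Face repo id

# The LoRA hyperparameters listed in the card, expressed as a peft config.
# This is what you would pass to get_peft_model() to train a fresh adapter
# with the same settings; for inference on the published adapter, PeftModel
# reads the stored adapter_config.json automatically.
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.1,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["query_key_value", "dense", "dense_h_to_4h", "dense_4h_to_h"],
)

# Inference: load the base model, then attach the published adapter weights.
tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
base_model = AutoModelForCausalLM.from_pretrained(
    BASE_ID, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, ADAPTER_ID)

prompt = "Explique em poucas palavras o que é um adaptador LoRA."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that the card lists a model max length of 2048 tokens, so prompt plus generated tokens should stay within that window.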