| LLM Name | Adapter Test |
|---|---|
| Repository 🤗 | https://huggingface.co/khKim/adapter_test |
| Base Model(s) | |
| Model Size | 8B |
| Required VRAM | 0.1 GB |
| Updated | 2024-08-16 |
| Maintainer | khKim |
| Model Files | |
| Model Architecture | Adapter |
| License | apache-2.0 |
| Is Biased | none |
| Tokenizer Class | PreTrainedTokenizerFast |
| Padding Token | `<\|reserved_special_token_250\|>` |
| PEFT Type | LORA |
| LoRA Model | Yes |
| PEFT Target Modules | q_proj, v_proj |
| LoRA Alpha | 128 |
| LoRA Dropout | 0.1 |
| R Param | 128 |
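The LoRA hyperparameters above map directly onto a PEFT `adapter_config.json`. Below is a minimal loading sketch, not the maintainer's own code: since the Base Model(s) field on this card is empty, the Llama 3 8B checkpoint named here is only a placeholder assumption, inferred from the 8B model size and the Llama-style reserved padding token.

```python
# Minimal sketch of loading this adapter with the PEFT library.
# Assumption: the base model is not listed on the card, so the
# Llama 3 8B checkpoint below is a placeholder guess (8B size,
# <|reserved_special_token_250|> padding token); swap in the real base.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B"  # placeholder, see note above
adapter_id = "khKim/adapter_test"

tokenizer = AutoTokenizer.from_pretrained(adapter_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)

# PeftModel reads the adapter config from the repo (r=128,
# lora_alpha=128, lora_dropout=0.1, target_modules=["q_proj", "v_proj"])
# and injects the low-rank update weights into the attention projections.
model = PeftModel.from_pretrained(base_model, adapter_id)
```

Because only the q_proj and v_proj projections carry trainable low-rank updates, the adapter itself accounts for the listed ~0.1 GB of VRAM on top of whatever the base model requires.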
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| ... 3 8B Instruct Bvr Finetune V3 | 8K / 16.1 GB | 5 | 0 |
| FatDPOv2LoRA | 0K / 0.8 GB | 3 | 1 |
| Llama3 C1 Full | 0K / 0.2 GB | 13 | 0 |
| Vortex2 | 0K / 4.4 GB | 8 | 0 |
| Fireball 3.1 8B ORPO | 0K / 16.1 GB | 6 | 2 |
| FineLlama3.1 8B Instruct Lora | 0K / 0.2 GB | 0 | 1 |
| Llama3 8B Instruct Code | 0K / 0.2 GB | 9 | 1 |
| Llama 3 8B Claudstruct V3 | 0K / 0.1 GB | 9 | 0 |
| Llama 3 8B Instruct 80K QLoRA | 0K / 2.2 GB | 0 | 24 |
| Llama 3 8B Claudstruct V1 | 0K / 0.1 GB | 6 | 0 |