| LLM Name | POC NEW Meta Llama 3 8B MEDAL Flash Attention 2 Cosine Evaldata |
|---|---|
| Repository 🤗 | https://huggingface.co/frankmorales2020/POC-NEW-Meta-Llama-3-8B-MEDAL-flash-attention-2-cosine-evaldata |
| Base Model(s) | |
| Model Size | 8B |
| Required VRAM | 0.7 GB |
| Updated | 2025-02-22 |
| Maintainer | frankmorales2020 |
| Model Files | |
| Model Architecture | Adapter |
| License | llama3 |
| Is Biased | none |
| Tokenizer Class | PreTrainedTokenizerFast |
| Padding Token | [PAD] |
| PEFT Type | LORA |
| LoRA Model | Yes |
| PEFT Target Modules | v_proj, q_proj, k_proj, down_proj, o_proj, up_proj, gate_proj |
| LoRA Alpha | 64 |
| LoRA Dropout | 0.05 |
| R Param | 128 |
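The LoRA hyperparameters above map directly onto a `peft.LoraConfig`. The sketch below is illustrative rather than the author's actual training code: the base model ID `meta-llama/Meta-Llama-3-8B` is an assumption (the card leaves Base Model(s) blank), and it shows one way the adapter could be loaded for inference with Flash Attention 2, matching the setup named in the model title.

```python
# Minimal sketch, not the maintainer's script. Assumes the Llama 3 8B base
# model; the card does not state which base checkpoint was used.
import torch
from peft import LoraConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

ADAPTER_ID = "frankmorales2020/POC-NEW-Meta-Llama-3-8B-MEDAL-flash-attention-2-cosine-evaldata"
BASE_ID = "meta-llama/Meta-Llama-3-8B"  # assumed base model

# Equivalent of the listed hyperparameters (r=128, alpha=64, dropout=0.05),
# useful for reproducing the fine-tune.
lora_config = LoraConfig(
    r=128,
    lora_alpha=64,
    lora_dropout=0.05,
    target_modules=["v_proj", "q_proj", "k_proj", "down_proj",
                    "o_proj", "up_proj", "gate_proj"],
    task_type="CAUSAL_LM",
)

# For inference: load the frozen base weights, then attach the trained adapter.
tokenizer = AutoTokenizer.from_pretrained(ADAPTER_ID)
base = AutoModelForCausalLM.from_pretrained(
    BASE_ID,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",  # matches the adapter's training setup
    device_map="auto",
)
model = PeftModel.from_pretrained(base, ADAPTER_ID)
```

Because this repository holds an adapter rather than a merged checkpoint, only the roughly 0.7 GB of LoRA weights listed under Required VRAM come from it; the 8B base model is fetched separately.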
| Model | Likes | Downloads | VRAM |
|---|---|---|---|
| Meta Llama 3 8B Lora | 0 | 8 | 16 GB |
| Meta Llama 3 8B GPTQ | 0 | 75 | 5 GB |
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| ... 3 8B Instruct Bvr Finetune V3 | 8K / 16.1 GB | 5 | 0 |
| Flippa V6 | 0K / 0 GB | 9 | 1 |
| Llama 3 Korean 8B R V 0.1 | 0K / 0 GB | 7 | 0 |
| ...a7 4262 4abb 97b1 1879f340d32e | 0K / 0.3 GB | 22 | 0 |
| Llama 3.1 8B Smart Lora | 0K / 0.2 GB | 0 | 1 |
| ...ultiModal Llama 3 8B Finetuned | 0K / 0 GB | 15 | 1 |
| ...lama 3 1 8B Instruct Orca ORPO | 0K / 0.1 GB | 14 | 2 |
| FatDPOv2LoRA | 0K / 0.8 GB | 4 | 1 |
| Adapter Test | 0K / 0.1 GB | 6 | 0 |
| Vortex2 | 0K / 4.4 GB | 8 | 0 |