LLM Name | Vega 1 6B |
Repository 🤗 | https://huggingface.co/ElMater06/Vega-1-6B |
Base Model(s) | |
Model Size | 6B |
Required VRAM | 6.6 GB |
Updated | 2024-12-28 |
Maintainer | ElMater06 |
Model Files | |
GGML Quantization | Yes |
GGUF Quantization | Yes |
Quantization Type | ggml, gguf |
Model Architecture | Adapter |
Model Max Length | 2048 |
Is Biased | none |
Tokenizer Class | GPT2Tokenizer |
Padding Token | <|endoftext|> |
PEFT Type | LORA |
LoRA Model | Yes |
PEFT Target Modules | q_proj, k_proj, v_proj, o_proj, up_proj, gate_proj |
LoRA Alpha | 8 |
LoRA Dropout | 0.1 |
R Param | 16 |
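
The PEFT fields above describe a standard LoRA adapter. Below is a minimal sketch of how an adapter with this configuration is typically built with the `peft` library; the hyperparameters are taken directly from the table (r=16, alpha=8, dropout=0.1, target modules q/k/v/o/up/gate projections), while `BASE_MODEL` is a hypothetical placeholder because the base model is not listed on this card.

```python
# Sketch only: build a LoRA configuration matching the fields listed above.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

BASE_MODEL = "path/to/base-model"  # hypothetical; the card leaves "Base Model(s)" blank

lora_config = LoraConfig(
    r=16,            # "R Param" above
    lora_alpha=8,    # "LoRA Alpha" above
    lora_dropout=0.1,  # "LoRA Dropout" above
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "up_proj", "gate_proj"],
    task_type="CAUSAL_LM",
)

base = AutoModelForCausalLM.from_pretrained(BASE_MODEL)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```

If the repository hosts the adapter weights in PEFT format, `PeftModel.from_pretrained(base, "ElMater06/ Vega-1-6B")` would attach the published adapter directly instead of creating a fresh one.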
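
Since the card also lists GGUF quantization, a common way to run such a file locally is with `llama-cpp-python`. This is an illustrative sketch only: the `.gguf` filename is an assumption (check the repository's file listing for the actual artifacts), and `n_ctx=2048` mirrors the "Model Max Length" field above.

```python
# Sketch only: run a GGUF quantization of the model with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="vega-1-6b.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=2048,                          # matches "Model Max Length" above
)

out = llm("Write a haiku about model adapters.", max_tokens=64)
print(out["choices"][0]["text"])
```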
Best Alternatives | Context / RAM | Downloads | Likes |
---|---|---|---|
Yi 1.5 6B Chat Sa V0.1 | 0K / 0 GB | 5 | 0 |
01 Ai Yi 1.5 6B 1719335236 | 0K / 0.4 GB | 5 | 0 |
01 Ai Yi 1.5 6B 1719098372 | 0K / 0.4 GB | 9 | 0 |
01 Ai Yi 1.5 6B 1718986516 | 0K / 0.4 GB | 6 | 0 |
Dreamtobenlpsama Mnlp M2 | 0K / 0 GB | 5 | 0 |
Yi 6B Yoruno Peft | 0K / 0.6 GB | 3 | 1 |
Trl Rm Tldr Gptj | 0K / 0 GB | 160 | 1 |
Yi 6b Chat Medical Qa Full | 0K / 0 GB | 3 | 1 |
Yi 6B Chat Finance Qa | 0K / 0 GB | 3 | 1 |
...i 6b Chat Medical Qa Full Beta | 0K / 0 GB | 3 | 1 |