| LLM Name | Saiga2 7b Lora |
|---|---|
| Repository 🤗 | https://huggingface.co/IlyaGusev/saiga2_7b_lora |
| Model Size | 7b |
| Required VRAM | 0.1 GB |
| Updated | 2025-02-22 |
| Maintainer | IlyaGusev |
| Instruction-Based | Yes |
| Model Files |  |
| Supported Languages | ru |
| Model Architecture | Adapter |
| License | cc-by-4.0 |
| Model Max Length | 4096 |
| Is Biased | none |
| Tokenizer Class | LlamaTokenizer |
| PEFT Type | LORA |
| LoRA Model | Yes |
| PEFT Target Modules | q_proj\|v_proj\|k_proj\|o_proj |
| LoRA Alpha | 16 |
| LoRA Dropout | 0.05 |
| R Param | 16 |
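The card above fully specifies the adapter: a LoRA (r = 16, alpha = 16, dropout 0.05) over the attention projections q_proj, v_proj, k_proj, and o_proj of a Llama 2 7B base, weighing only about 0.1 GB. A minimal sketch of attaching it with `transformers` + `peft` follows; the fp16 dtype and device placement are illustrative assumptions, not part of the card.

```python
# Minimal sketch: attach the Saiga2 LoRA adapter to its Llama 2 7B base.
# Assumptions: fp16 weights and device_map="auto" are illustrative choices.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "TheBloke/Llama-2-7B-fp16"     # Llama 2 7B base (see table below)
ADAPTER = "IlyaGusev/saiga2_7b_lora"  # repository from the card

tokenizer = AutoTokenizer.from_pretrained(ADAPTER)  # LlamaTokenizer, 4096 max length
base = AutoModelForCausalLM.from_pretrained(
    BASE, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, ADAPTER)    # applies the ~0.1 GB LoRA weights
model.eval()
```

Because only the low-rank updates to the q/k/v/o projections are stored, the adapter itself downloads in roughly 0.1 GB; the ~13 GB fp16 base model still has to be fetched and loaded separately.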
Base model:

| Model | Likes | Downloads | VRAM |
|---|---|---|---|
| Llama 2 7B Fp16 | 44 | 8394 | 13 GB |
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Qwen Megumin | 0K / 0.1 GB | 13 | 0 |
| Deepthink Reasoning Adapter | 0K / 0.2 GB | 27 | 8 |
| Mistral 7B Instruct Sa V0.1 | 0K / 0 GB | 6 | 0 |
| ...Sql Flash Attention 2 Dataeval | 0K / 1.9 GB | 32 | 3 |
| ...82 6142 45d8 9455 Bc68ca4866eb | 0K / 1.2 GB | 6 | 0 |
| Text To Rule Mistral 2 | 0K / 0.3 GB | 6 | 0 |
| ...al 7B Instruct V0.3 1719301256 | 0K / 0.9 GB | 9 | 0 |
| ...al 7B Instruct V0.3 1719297750 | 0K / 0.4 GB | 6 | 0 |
| Text To Rule Mistral | 0K / 0.4 GB | 6 | 0 |
| ...al 7B Instruct V0.3 1719341325 | 0K / 1.7 GB | 6 | 0 |