| Property | Value |
|---|---|
| LLM Name | Saiga2 70b Lora |
| Repository 🤗 | https://huggingface.co/IlyaGusev/saiga2_70b_lora |
| Model Size | 70b |
| Required VRAM | 0.3 GB |
| Updated | 2024-12-22 |
| Maintainer | IlyaGusev |
| Instruction-Based | Yes |
| Model Files | |
| Supported Languages | ru |
| Model Architecture | Adapter |
| License | cc-by-4.0 |
| Model Max Length | 4096 |
| Is Biased | none |
| Tokenizer Class | LlamaTokenizer |
| PEFT Type | LORA |
| LoRA Model | Yes |
| PEFT Target Modules | q_proj\|v_proj\|k_proj\|o_proj |
| LoRA Alpha | 16 |
| LoRA Dropout | 0.05 |
| R Param | 16 |
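The PEFT fields above fully describe the adapter's configuration (rank 16, alpha 16, dropout 0.05, applied to the attention projections). Below is a minimal sketch of how such an adapter could be loaded with the `peft` library; the base checkpoint name `meta-llama/Llama-2-70b-hf` is an assumption inferred from the 70b Llama 2 lineage, not something stated on this page.

```python
# Minimal sketch: apply the Saiga2 70b LoRA adapter to its base model.
# Assumption: the base checkpoint is "meta-llama/Llama-2-70b-hf" (not stated in the table).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel, LoraConfig

base_id = "meta-llama/Llama-2-70b-hf"       # assumed Llama 2 70B base
adapter_id = "IlyaGusev/saiga2_70b_lora"    # adapter listed above (~0.3 GB of LoRA weights)

tokenizer = AutoTokenizer.from_pretrained(adapter_id)   # LlamaTokenizer, max length 4096
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, adapter_id)     # merges in the adapter at inference
model.eval()

# For reference, the hyperparameters in the table correspond to a LoraConfig like this:
reference_config = LoraConfig(
    r=16,                # R Param
    lora_alpha=16,       # LoRA Alpha
    lora_dropout=0.05,   # LoRA Dropout
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```

Because only the adapter weights (about 0.3 GB) live in this repository, the full Llama 2 70B base model still has to be downloaded and held in memory; the VRAM figure above refers to the adapter alone.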
| Model | Likes | Downloads | VRAM |
|---|---|---|---|
| Saiga2 70b Lora GPTQ | 2 | 17 | 36 GB |
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Llama 3 70B Instruct Spider | 0K / 141.9 GB | 6 | 0 |
| Llama3v1 | 0K / 0.1 GB | 5 | 0 |
| LLaMA 2 Wizard 70B QLoRA | 0K / 1.7 GB | 0 | 4 |
| Llama 2 70B Instruct V0.1 | 0K / 1.1 GB | 68 | 14 |