LLM Name | Lora Sft Finetuned Stage4 Iter86000 |
Repository 🤗 | https://huggingface.co/GENIAC-Team-Ozaki/lora-sft-finetuned-stage4-iter86000 |
Model Size | 11.5B |
Required VRAM | 23.2 GB |
Updated | 2025-02-17 |
Maintainer | GENIAC-Team-Ozaki |
Model Type | llama |
Model Files | |
Model Architecture | LlamaForCausalLM |
Context Length | 2048 |
Model Max Length | 2048 |
Transformers Version | 4.40.0 |
Tokenizer Class | T5Tokenizer |
Padding Token | <pad> |
Vocabulary Size | 50816 |
LoRA Model | Yes |
Torch Data Type | bfloat16 |
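
Below is a minimal loading sketch based on the metadata above; it assumes the repository hosts merged weights matching the listed LlamaForCausalLM architecture and Transformers 4.40.0. The repo ID and bfloat16 dtype come from the card, while the prompt text is a hypothetical example. Since the card flags this as a LoRA model, the repo may instead contain adapter-only weights, in which case loading through PEFT (e.g., `peft.AutoPeftModelForCausalLM`) would be needed.

```python
# Minimal sketch, not an official usage example for this model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "GENIAC-Team-Ozaki/lora-sft-finetuned-stage4-iter86000"

# The card lists T5Tokenizer as the tokenizer class; AutoTokenizer resolves
# the concrete class from the repo's tokenizer_config.json.
tokenizer = AutoTokenizer.from_pretrained(repo_id)

# bfloat16 matches the listed torch dtype. At ~2 bytes per parameter, an
# 11.5B-parameter model needs roughly 23 GB, consistent with the card's
# "Required VRAM" of 23.2 GB.
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "日本の首都はどこですか？"  # hypothetical prompt: "What is the capital of Japan?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Stay well inside the 2048-token context length listed above.
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
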
Best Alternatives | Context / RAM | Downloads | Likes |
---|---|---|---|
Llama 3 Monika Ddlc 11.5B V1 | 8K / 23 GB | 35 | 3 |
Llama 3 11.5B V2 | 8K / 23 GB | 36 | 41 |
Llama 3 11.5B V0.1 | 8K / 23 GB | 21 | 1 |
Llama 3 11.5B Instruct V2 | 8K / 23 GB | 7 | 6 |
Mermaid 11.5B | 4K / 23.2 GB | 4 | 1 |
...ena Finetuned Stage2 Iter40000 | 2K / 23.2 GB | 5 | 0 |
L3 11.5B DuS MoonRoot Bnb 4bit | 8K / 7.5 GB | 5 | 0 |