| Field | Value |
|---|---|
| LLM Name | Llava Jp 1.3B V1.1 Llava Jp Instruct 108K |
| Repository 🤗 | https://huggingface.co/toshi456/llava-jp-1.3b-v1.1-llava-jp-instruct-108k |
| Model Size | 1.3b |
| Required VRAM | 6.6 GB |
| Updated | 2025-02-22 |
| Maintainer | toshi456 |
| Model Type | llava-jp |
| Instruction-Based | Yes |
| Supported Languages | ja |
| Model Architecture | LlavaGpt2ForCausalLM |
| License | apache-2.0 |
| Model Max Length | 1532 |
| Transformers Version | 4.38.2 |
| Tokenizer Class | PreTrainedTokenizerFast |
| Padding Token | <unk\|LLM-jp> |
| Vocabulary Size | 50688 |
| Torch Data Type | float32 |
| Activation Function | gelu |
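
For reference, a minimal loading sketch based on the fields above. `LlavaGpt2ForCausalLM` is a custom architecture rather than a built-in transformers class, so loading through the Auto classes presumably requires `trust_remote_code=True`; this is an assumption, not a documented API of the repo. Image preprocessing follows the maintainer's LLaVA-JP code and is not shown here.

```python
# Minimal loading sketch (assumptions noted inline): llava-jp ships a custom
# LlavaGpt2ForCausalLM architecture, so the standard Auto classes likely need
# trust_remote_code=True to resolve it.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

repo = "toshi456/llava-jp-1.3b-v1.1-llava-jp-instruct-108k"

# Tokenizer class per the model card: PreTrainedTokenizerFast
tokenizer = AutoTokenizer.from_pretrained(repo)

# Torch data type per the model card: float32 (~6.6 GB VRAM required)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float32,
    trust_remote_code=True,  # assumed: custom LlavaGpt2ForCausalLM code
)
model.eval()
```

The 6.6 GB VRAM figure is consistent with float32 storage: 1.3B parameters × 4 bytes ≈ 5.2 GB for the language model alone, with the vision tower and runtime overhead accounting for the remainder.
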
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Llava Jp 1.3B V1.1 | 0K / 6.6 GB | 705 | 11 |
| ConvLLaVA JP 1.3B 1280 | 0K / 7.1 GB | 19 | 1 |
| ConvLLaVA JP 1.3B 768 | 0K / 7.1 GB | 18 | 2 |
| ...V1.0 Siglip So400m Patch14 384 | 0K / 6.6 GB | 65 | 0 |
| Llava Jp 1.3B V1.0 | 0K / 6.3 GB | 73 | 5 |