| Model Type | multimodal, chatbot, transformer |
|---|---|
| Use Cases | |
| Areas | |
| Applications | large multimodal models and chatbots |
| Primary Use Cases | research on large multimodal models |
| Additional Notes | The model is an open-source chatbot trained by fine-tuning LLaMA/Vicuna/MPT on GPT-generated multimodal instruction-following data. Use of the model should comply with the MPT-7B-chat license and agreements. |
| Training Details | |
| Data Sources | |
| Data Volume | 558K filtered image-text pairs; 80K GPT-generated multimodal instruction-following examples |
| Methodology | fine-tuning LLaMA/Vicuna/MPT on GPT-generated multimodal instruction-following data |
| Training Time | |
| Model Architecture | |
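As an illustration of the "GPT-generated multimodal instruction-following data" mentioned above, a single training record might be structured roughly as sketched below. The schema (field names `id`, `image`, `conversations`, and the `from`/`value` turn format) is an assumption for illustration only; the card itself does not specify one.

```python
import json

# Hypothetical instruction-following record (schema assumed, not from the card):
# an image reference plus a human/assistant conversation about that image.
record = {
    "id": "000001",
    "image": "images/000001.jpg",
    "conversations": [
        {"from": "human", "value": "<image>\nWhat is shown in the image?"},
        {"from": "gpt", "value": "A dog running across a grassy field."},
    ],
}

def count_turns(rec):
    """Return the number of conversation turns in one record."""
    return len(rec["conversations"])

# Records of this kind are typically stored as JSON, so a round-trip
# through json.dumps/json.loads should preserve the structure.
restored = json.loads(json.dumps(record))
print(count_turns(restored))  # 2
```

A fine-tuning pipeline would iterate over many such records, pairing each image with its conversation to build the model inputs.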
|