| Field | Value |
|---|---|
| LLM Name | Bumblebee Light |
| Repository 🤗 | https://huggingface.co/MonolithFoundation/Bumblebee-Light |
| Model Size | 7B |
| Required VRAM | 18.6 GB |
| Updated | 2025-02-22 |
| Maintainer | MonolithFoundation |
| Model Type | llava_llama |
| Model Files | |
| Model Architecture | LlavaQwen2ForCausalLM |
| Context Length | 32768 |
| Model Max Length | 32768 |
| Transformers Version | 4.39.3 |
| Tokenizer Class | Qwen2Tokenizer |
| Padding Token | <\|endoftext\|> |
| Vocabulary Size | 151646 |
| Torch Data Type | float16 |
| Errors Handling | replace |
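Given the card's metadata (custom LlavaQwen2ForCausalLM architecture, float16 weights, Qwen2 tokenizer), a minimal loading sketch might look like the following. This is an assumption-laden example rather than the maintainer's documented usage: it presumes the repository ships custom modeling code (hence `trust_remote_code=True`) and shows text-only generation; the model's image-input path is model-specific and not covered here.

```python
# Minimal sketch: loading Bumblebee-Light with Hugging Face transformers.
# Assumes the repo provides custom modeling code for LlavaQwen2ForCausalLM,
# so trust_remote_code=True is needed; the exact multimodal API may differ.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "MonolithFoundation/Bumblebee-Light"

tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,  # matches the card's float16 weights
    trust_remote_code=True,
    device_map="auto",          # ~18.6 GB VRAM required per the card
)

prompt = "Describe the capabilities of a vision-language model."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```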
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| HuatuoGPT Vision 7B | 128K / 15.9 GB | 1900 | 15 |