Internlm Xcomposer2 7B by internlm


Tags: Arxiv:2401.16420 · Custom code · Feature-extraction · Internlmxcomposer2 · Pytorch · Region:us

Internlm Xcomposer2 7B Benchmarks

nn.n% — score indicating how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Internlm Xcomposer2 7B (internlm/internlm-xcomposer2-7b)

Internlm Xcomposer2 7B Parameters and Internals

Model Type: vision-language large model, text-image comprehension and composition
Use Cases:
- Areas: text-image composition, multimodal benchmarks
- Primary Use Cases: advanced text-image comprehension and composition
Supported Languages: en (high), zh (high)
Input Output:
- Input Format: images in RGB format and text prompts
- Accepted Modalities: text, image
- Output Format: generated text based on input images and prompts
- Performance Tips: use appropriate data types and manage memory to avoid OOM errors.
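The data-type tip above comes down to weight precision: at bfloat16 (2 bytes per weight), the 17.3 GB checkpoint must fit in VRAM before activations and KV cache are even counted. A rough back-of-envelope check (my own stdlib-only helper, not part of the repository):

```python
def weights_gb(n_params_billion: float, bits_per_param: int) -> float:
    """Approximate weight memory in GB: params * bits / 8 bits-per-byte."""
    return n_params_billion * bits_per_param / 8

def fits(gpu_gb: float, model_gb: float, headroom: float = 0.2) -> bool:
    """Leave ~20% headroom for activations, KV cache, and CUDA overhead."""
    return model_gb <= gpu_gb * (1 - headroom)

# A 17.3 GB bf16 checkpoint implies roughly 8.65B stored weights
# (the 7B language model plus vision encoder and adapter layers).
print(weights_gb(8.65, 16))   # ≈ 17.3 GB
print(fits(24.0, 17.3))       # a 24 GB card holds the bf16 weights with headroom
print(fits(16.0, 17.3))       # a 16 GB card does not
```

The 20% headroom figure is a rule of thumb, not a measured value for this model.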
LLM Name: Internlm Xcomposer2 7B
Repository: 🤗 https://huggingface.co/internlm/internlm-xcomposer2-7b
Model Size: 7b
Required VRAM: 17.3 GB
Updated: 2025-02-22
Maintainer: internlm
Model Type: internlmxcomposer2
Model Files: 17.3 GB
Model Architecture: InternLMXComposer2ForCausalLM
License: other
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.33.1
Is Biased: 0
Tokenizer Class: InternLMXComposer2Tokenizer
Padding Token: </s>
Vocabulary Size: 92544
Torch Data Type: bfloat16
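The 32768-token context length carries a memory cost on top of the weights. Assuming the public InternLM2-7B decoder layout (32 layers, 8 grouped KV heads of dimension 128 — values inferred from the base architecture, not confirmed for this exact checkpoint), the bf16 KV cache at full context works out to roughly:

```python
# Assumed config values from the public InternLM2-7B architecture;
# treat them as illustrative, not authoritative for this checkpoint.
LAYERS, KV_HEADS, HEAD_DIM, BYTES_BF16 = 32, 8, 128, 2

def kv_cache_gb(seq_len: int) -> float:
    """Two tensors (K and V) per layer, each [kv_heads, seq_len, head_dim]."""
    return 2 * LAYERS * KV_HEADS * seq_len * HEAD_DIM * BYTES_BF16 / 1e9

print(kv_cache_gb(32768))  # ≈ 4.3 GB on top of the 17.3 GB of weights
```

This is one reason the "manage memory" tip matters: a full-length prompt can add several GB beyond the checkpoint size.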

Quantized Models of the Internlm Xcomposer2 7B

Model | Likes | Downloads | VRAM
Internlm Xcomposer2 7B 4bit | 0 | 33 | 7 GB
Internlm Xcomposer2 7B 4bit | 10 | 51 | 7 GB
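A naive estimate says a fully 4-bit checkpoint should be about a quarter of the bf16 size (~4.3 GB), yet the listed 4-bit builds need about 7 GB. A plausible (assumed, not documented) explanation is that only part of the network is quantized, with the vision encoder, embeddings, and norm layers kept at 16-bit:

```python
def quantized_gb(bf16_gb: float, bits: int, frac_quantized: float = 1.0) -> float:
    """Estimate checkpoint size when a fraction of weights is quantized to
    `bits` and the rest stays at 16-bit. Ignores scale/zero-point overhead."""
    return bf16_gb * (frac_quantized * bits / 16 + (1 - frac_quantized))

naive = quantized_gb(17.3, 4)  # ≈ 4.3 GB if everything were 4-bit

# Solving for the fraction that reproduces the listed ~7 GB suggests only
# about 80% of the weights are actually stored at 4 bits.
frac = (17.3 - 7.0) / (17.3 * (1 - 4 / 16))
print(round(frac, 2))  # ≈ 0.79
```

The 80% figure is a back-solved illustration, not a documented property of these quantized builds.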

Best Alternatives to Internlm Xcomposer2 7B

Best Alternatives | Context / RAM | Downloads | Likes
Internlm Xcomposer2 Vl 7B | 32K / 17.3 GB | 1239 | 80
Internlm Xcomposer2 Vl 7B 4bit | 32K / 7 GB | 1911 | 27
Internlm Xcomposer2 7B 4bit | 32K / 7 GB | 33 | 0
Internlm Xcomposer2 7B 4bit | 32K / 7 GB | 51 | 10
Note: a green score (e.g. "73.2") indicates that the model is better than internlm/internlm-xcomposer2-7b.



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227