LLaVA Meta Llama 3 8B Instruct Lora by MBZUAI


Tags: Autotrain compatible · Endpoints compatible · Instruct · Llava llama · LoRA · Region: us · Safetensors

LLaVA Meta Llama 3 8B Instruct Lora Benchmarks

[Benchmark chart: scores (percentages) compare the model against reference models Anthropic Sonnet 3.5, GPT-4o, and GPT-4.]

LLaVA Meta Llama 3 8B Instruct Lora Parameters and Internals

Model Type: Multimodal
Additional Notes: The repository contains projector and LoRA weights.
Training Details:
- Data sources: LCS-558K, LLaVA-Instruct-665K
- Methodology: Pretraining trains only the vision-to-language projector, keeping the rest of the model frozen. Fine-tuning applies LoRA to the LLM while the CLIP vision backbone stays frozen.
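The fine-tuning stage described above can be illustrated with a minimal sketch of the LoRA update: the frozen base weight W is augmented by a low-rank product B @ A scaled by alpha / r. This is an illustrative toy, not code from the repository; the card's actual adapter uses r = 128 and alpha = 256, while tiny dimensions are used here for clarity.

```python
import numpy as np

# Hypothetical sketch of a LoRA-adapted linear layer. W stays frozen;
# only A and B would be trained during fine-tuning.
rng = np.random.default_rng(0)

d_out, d_in, r, alpha = 16, 16, 4, 8
W = rng.standard_normal((d_out, d_in))      # frozen base projection
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection (zero-init)

def lora_forward(x):
    # Base path plus scaled low-rank correction: W x + (alpha / r) B A x
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B zero-initialized, the adapter initially leaves the base output unchanged.
assert np.allclose(lora_forward(x), W @ x)
```

Zero-initializing B is the standard LoRA choice: training starts from the frozen model's behavior and only gradually departs from it.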
LLM Name: LLaVA Meta Llama 3 8B Instruct Lora
Repository: https://huggingface.co/MBZUAI/LLaVA-Meta-Llama-3-8B-Instruct-lora
Model Size: 8B
Required VRAM: 0.7 GB
Updated: 2024-12-22
Maintainer: MBZUAI
Instruction-Based: Yes
Model Files: 0.7 GB, 0.0 GB
Model Architecture: AutoModelForCausalLM
Is Biased: none
PEFT Type: LORA
LoRA Model: Yes
PEFT Target Modules: q_proj | down_proj | k_proj | up_proj | gate_proj | o_proj | v_proj
LoRA Alpha: 256
LoRA Dropout: 0.05
R Param: 128
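Under the usual Hugging Face PEFT conventions, the adapter settings listed above (PEFT type LORA, r = 128, alpha = 256, dropout = 0.05, bias = none) would correspond roughly to the following adapter configuration. The field names follow the common PEFT adapter_config.json schema and are an assumption, not copied from the repository.

```python
# Hedged reconstruction of the adapter configuration implied by the
# card's PEFT fields; field names assume the standard PEFT schema.
adapter_config = {
    "peft_type": "LORA",
    "r": 128,
    "lora_alpha": 256,
    "lora_dropout": 0.05,
    "bias": "none",
    "target_modules": [
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
}

# Effective scaling applied to the low-rank update: alpha / r
scaling = adapter_config["lora_alpha"] / adapter_config["r"]
assert scaling == 2.0
```

Note that alpha = 2r (scaling factor 2.0) is a common choice; targeting all seven attention and MLP projections, as this card does, adapts the full transformer block rather than attention alone.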

Best Alternatives to LLaVA Meta Llama 3 8B Instruct Lora

Best Alternatives | Context / RAM | Downloads / Likes
500tiao 100lun | 0K / 0.2 GB | 230
Autotrain Pvqlj Odah2 | 0K / 0.2 GB | 180
Codelica | 0K / 16.1 GB | 1110
ModeliCo 8B | 0K / 16.1 GB | 172
Llama31 Eros | 0K / 16.1 GB | 271
Llama 3.1 8b Prop Logic Ft | 0K / 0.2 GB | 614
...Instruct Persian Finetuned Sft | 0K / 0 GB | 525
Meta Llama 3.1 8B Instruct OAS | 0K / 16.1 GB | 2151
... 3 Instruct Bellman 8B Swedish | 0K / 0 GB | 872
Llama 3 8B Instruct Spider | 0K / 16.1 GB | 180
Note: a green score (e.g. "73.2") indicates that the model scores better than MBZUAI/LLaVA-Meta-Llama-3-8B-Instruct-lora.



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217