Best Alternatives | HF Rank | Context/RAM | Downloads | Likes |
---|---|---|---|---|
Candle Llava V1.6 Mistral 7B | — | 32K / 15.1 GB | 290 | 1 |
Llava V1.6 Mistral 7B Bnb 4bit | — | 32K / 4.6 GB | 17 | 0 |
Llava Mistral 7B Tokenizer | — | 32K / GB | 19 | 2 |
Llava Maid 7B DPO | — | 32K / 14.5 GB | 2 | 2 |
BakLLaVA 1 | — | 32K / 15.1 GB | 37 | 370 |
Llava V1.6 Mistral 7B | — | 32K / 15.1 GB | 97711 | 205 |
LLaVA NeXT Video 7B 32K | — | 32K / 15.1 GB | 10756 | 7 |
Llava Mistral 7B Finetuned | — | 32K / 15.1 GB | 3 | 2 |
Llava Model | — | 32K / 15.1 GB | 147 | 0 |
... 1.6 7B Light Custom No Aug T1 | — | 32K / 15.1 GB | 8 | 0 |
LLM Name | Llava NousResearch Nous Hermes 2 Vision GGUF |
---|---|
Repository | Open on 🤗 |
Base Model(s) | |
Model Size | 7b |
Required VRAM | 15.3 GB |
Updated | 2024-07-05 |
Maintainer | billborkowski |
Model Type | llava_mistral |
Model Files | |
Supported Languages | en |
GGUF Quantization | Yes |
Quantization Type | gguf\|q4 |
Model Architecture | LlavaMistralForCausalLM |
License | apache-2.0 |
Context Length | 32768 |
Model Max Length | 32768 |
Transformers Version | 4.34.1 |
Tokenizer Class | LlamaTokenizer |
Padding Token | `<unk>` |
Vocabulary Size | 32002 |
Initializer Range | 0.02 |
Torch Data Type | bfloat16 |
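The VRAM figures in the tables above (roughly 15 GB for the full-precision 7B models versus about 4.6 GB for the 4-bit variant) follow directly from bits per weight. A rough sketch with a hypothetical helper, assuming ~7e9 parameters and ~4.5 effective bits per weight for a q4 GGUF quant; real file sizes also include the vision tower, tokenizer, and metadata, so treat these as estimates only:

```python
def approx_model_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough weight-memory estimate in GiB (1 GiB = 2**30 bytes)."""
    return n_params * bits_per_weight / 8 / 2**30

# 7B parameters in bfloat16 (16 bits/weight) vs. a ~4-bit GGUF quant.
fp16_gb = approx_model_size_gb(7e9, 16)   # roughly 13 GiB of raw weights
q4_gb = approx_model_size_gb(7e9, 4.5)    # roughly 4 GiB of raw weights

print(f"bf16 ≈ {fp16_gb:.1f} GiB, q4 ≈ {q4_gb:.1f} GiB")
```

This back-of-envelope math is why a 4-bit GGUF of the same 7B architecture can run on consumer GPUs that cannot hold the bfloat16 checkpoint.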