LLM Name | Mixtral 8x7B V0.1 4bit Bnb |
Repository 🤗 | https://huggingface.co/monsterapi/Mixtral-8x7B-v0.1_4bit_bnb |
Model Size | 24.2B parameters |
Required VRAM | 24.5 GB |
Updated | 2025-02-22 |
Maintainer | monsterapi |
Model Type | mixtral |
Quantization Type | 4bit |
Model Architecture | MixtralForCausalLM |
Context Length | 32768 |
Model Max Length | 32768 |
Transformers Version | 4.40.2 |
Tokenizer Class | LlamaTokenizer |
Vocabulary Size | 32000 |
Torch Data Type | bfloat16 |
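
The fields above are enough to load this checkpoint directly. Below is a minimal loading sketch, assuming a `transformers` version compatible with the 4.40.2 listed above plus the `accelerate` and `bitsandbytes` packages installed, and roughly 24.5 GB of free VRAM as stated in the Required VRAM field; the repo ID is taken from the Repository field.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "monsterapi/Mixtral-8x7B-v0.1_4bit_bnb"

# The checkpoint is stored pre-quantized in 4-bit bitsandbytes format, so no
# extra quantization config is needed at load time; transformers picks it up
# from the repo's config. device_map="auto" spreads layers across available GPUs.
tokenizer = AutoTokenizer.from_pretrained(repo_id)  # LlamaTokenizer, vocab size 32000
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

# Quick generation check (context length supports up to 32768 tokens).
prompt = "Mixture-of-experts models work by"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights are already 4-bit, loading skips the usual `BitsAndBytesConfig` step; compute runs in bfloat16, matching the Torch Data Type field above.
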
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---|
...al 8x7B Instruct V0.1 4bit Bnb | 32K / 24.5 GB | 8 | 1 |
...ixtral 8x7B Instruct V0.1 4bit | 32K / 24.5 GB | 54 | 2 |
...al 8x7B Instruct V0.1 Bnb 4bit | 32K / 26.7 GB | 196 | 59 |
Dzakwan MoE 4x7b Beta | 32K / 48.4 GB | 3844 | 0 |
Beyonder 4x7B V3 | 32K / 48.3 GB | 3941 | 58 |
Calme 4x7B MoE V0.2 | 32K / 48.3 GB | 5636 | 2 |
Proto Athena 4x7B | 32K / 48.4 GB | 15 | 0 |
Proto Athena V0.2 4x7B | 32K / 48.4 GB | 8 | 0 |
Mera Mix 4x7B | 32K / 48.3 GB | 3525 | 18 |
Calme 4x7B MoE V0.1 | 32K / 48.3 GB | 3951 | 2 |