| Best Alternatives | HF Rank | Context/RAM | Downloads | Likes |
|---|---|---|---|---|
| Mixtral 8x7B Instruct V0.1 | 77.75 | 32K / 93.6 GB | 529186 | 3945 |
| ...lQA Mixtral 8x7B Instruct V0.1 | — | 32K / 43.3 GB | 5 | 2 |
| Mixtral 8x7B Instruct V0.1 FP8 | — | 32K / 47.1 GB | 237 | 1 |
| ...tral 8x7B Instruct V0.1 FP8 V3 | — | 32K / 47.1 GB | 36 | 0 |
| ...tral 8x7B Instruct V0.1 FP8 V1 | — | 32K / 47.1 GB | 6 | 0 |
| Mixtral Instruct ITR 8x7B | — | 32K / 91.4 GB | 1 | 1 |
| Maid Yuzu V8 Alter | — | 32K / 91.7 GB | 1 | 2 |
| Merge Mixtral Prometheus 8x7B | — | 32K / 91.9 GB | 64 | 1 |
| ...ELT Mixtral 8x7B Instruct V0.1 | — | 32K / 92 GB | 2 | 3 |
| Yk 8x7b Model V1 | — | 32K / 92 GB | 309 | 0 |
| LLM Name | Mixtral 8x7B Instruct V0.1 FP8 V2 |
|---|---|
| Repository | Open on Hugging Face |
| Model Size | 46.7B |
| Required VRAM | 47.1 GB |
| Updated | 2024-07-07 |
| Maintainer | comaniac |
| Model Type | mixtral |
| Instruction-Based | Yes |
| Model Files | — |
| Model Architecture | MixtralForCausalLM |
| Context Length | 32768 |
| Model Max Length | 32768 |
| Transformers Version | 4.40.2 |
| Tokenizer Class | LlamaTokenizer |
| Vocabulary Size | 32000 |
| Initializer Range | 0.02 |
| Torch Data Type | bfloat16 |
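
The details above (MixtralForCausalLM architecture, 32768-token context, FP8 weights at roughly 47.1 GB) suggest the checkpoint is intended for an inference stack with native FP8 support, such as vLLM. The sketch below is illustrative only, not taken from this card: the repository ID is inferred from the maintainer and model name and should be verified on Hugging Face, and the FP8 serving path via vLLM is an assumption.

```python
# Minimal sketch: serving this FP8 Mixtral checkpoint with vLLM.
# Assumptions (not confirmed by the card): the repo ID below and vLLM as the target runtime.
from vllm import LLM, SamplingParams

repo_id = "comaniac/Mixtral-8x7B-Instruct-v0.1-FP8-v2"  # assumed repo ID, verify on the hub

llm = LLM(
    model=repo_id,
    quantization="fp8",    # FP8 weights, ~47.1 GB required VRAM per the card
    max_model_len=32768,   # Context Length listed above
)

# Mixtral-Instruct models expect the [INST] ... [/INST] prompt format.
prompt = "[INST] Summarize the benefits of FP8 quantization in one sentence. [/INST]"
outputs = llm.generate([prompt], SamplingParams(temperature=0.7, max_tokens=128))
print(outputs[0].outputs[0].text)
```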