MixTAO 7Bx2 MoE Instruct V7.0 GGUF by MaziyarPanahi


Tags: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, Autotrain compatible, Base model: zhengr/mixtao-7bx2-..., Endpoints compatible, GGUF, Has space, Instruct, License: apache-2.0, Mistral, Mixtral, MoE, Quantized, Region: us, Safetensors

MixTAO 7Bx2 MoE Instruct V7.0 GGUF Benchmarks

Rank the MixTAO 7Bx2 MoE Instruct V7.0 GGUF Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
MixTAO 7Bx2 MoE Instruct V7.0 GGUF (MaziyarPanahi/MixTAO-7Bx2-MoE-Instruct-v7.0-GGUF)

Best Alternatives to MixTAO 7Bx2 MoE Instruct V7.0 GGUF

Best Alternatives | HF Rank | Context/RAM | Downloads | Likes
...Top 5x7B Instruct S5 V0.1 GGUF | 65.9 | 0K / 2.7 GB | 171 | 1
...eTop 5x7B Instruct T V0.1 GGUF | 65.7 | 0K / 2.7 GB | 179 | 0
...Top 5x7B Instruct S4 V0.1 GGUF | 65.7 | 0K / 2.7 GB | 173 | 0
...eTop 5x7B Instruct D V0.1 GGUF | 65.4 | 0K / 2.7 GB | 175 | 0
Sakura SOLAR Instruct GGUF | 65.2 | 0K / 4.5 GB | 314 | 5
...rautLM UNA SOLAR Instruct GGUF | 65.1 | 0K / 4.5 GB | 477 | 12
...uerkrautLM SOLAR Instruct GGUF | 65.1 | 0K / 4.5 GB | 296 | 2
...Top 5x7B Instruct S3 V0.1 GGUF | 64.9 | 0K / 2.7 GB | 165 | 0
...UNA SOLARkrautLM Instruct GGUF | 64.7 | 0K / 4.5 GB | 315 | 4
...xtral Instruct 8x7b Zloss GGUF | 64.2 | 0K / 17.2 GB | 867 | 18

MixTAO 7Bx2 MoE Instruct V7.0 GGUF Parameters and Internals

LLM Name: MixTAO 7Bx2 MoE Instruct V7.0 GGUF
Repository: Open on 🤗
Model Name: MixTAO-7Bx2-MoE-Instruct-v7.0-GGUF
Model Creator: zhengr
Base Model(s): MixTAO 7Bx2 MoE Instruct V7.0 (zhengr/MixTAO-7Bx2-MoE-Instruct-v7.0)
Required VRAM: 4.8 GB
Updated: 2024-07-01
Maintainer: MaziyarPanahi
Model Type: mistral
Instruction-Based: Yes
Model Files: 4.8 GB, 6.7 GB, 6.2 GB, 5.6 GB, 7.8 GB, 7.3 GB, 9.1 GB, 8.9 GB, 10.6 GB, 13.7 GB
GGUF Quantization: Yes
Quantization Type: gguf
Model Architecture: AutoModel
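The repository ships the model at several quantization levels (2-bit through 8-bit), and as a rough rule of thumb a GGUF file needs at least its own size in free RAM/VRAM to load. A minimal sketch of choosing among the listed file sizes by memory budget (the `pick_quant` helper is hypothetical, not part of any listed tooling):

```python
# File sizes in GB, copied from the "Model Files" row of the listing above.
FILE_SIZES_GB = [4.8, 6.7, 6.2, 5.6, 7.8, 7.3, 9.1, 8.9, 10.6, 13.7]

def pick_quant(budget_gb, sizes=FILE_SIZES_GB):
    """Return the largest quant file size (GB) that fits the budget, or None.

    Larger GGUF files correspond to higher-bit quants, which generally
    preserve more quality, so we prefer the biggest one that fits.
    """
    fitting = [s for s in sizes if s <= budget_gb]
    return max(fitting) if fitting else None

print(pick_quant(8.0))   # -> 7.8 (likely a mid-bit quant)
print(pick_quant(4.0))   # -> None (even the 2-bit file needs ~4.8 GB)
```

Note this is only a sizing heuristic; actual memory use also depends on context length and the inference runtime's overhead.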


Original data from HuggingFace, OpenCompass and various public git repos.
Release v2024042801