LLM Explorer: A Curated Large Language Model Directory and Analytics

Synthia MoE V3 Mixtral 8x7B 3.5bpw H6 EXL2 2 by LoneStriker

What open-source LLMs or SLMs are you in search of? 18,870 models in total.

Tags: Autotrain compatible | Endpoints compatible | Exl2 | License: apache-2.0 | Mixtral | MoE | PyTorch | Quantized | Region: US | Sharded | TensorFlow

Rank the Synthia MoE V3 Mixtral 8x7B 3.5bpw H6 EXL2 2 Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
Synthia MoE V3 Mixtral 8x7B 3.5bpw H6 EXL2 2 (LoneStriker/Synthia-MoE-v3-Mixtral-8x7B-3.5bpw-h6-exl2-2)

Best Alternatives to Synthia MoE V3 Mixtral 8x7B 3.5bpw H6 EXL2 2

Best Alternatives                  | HF Rank | Context/RAM   | Downloads | Likes
Saulgoodman 2x7b Alpha1            | 69.43   | 32K / 25.8 GB | 304       | 20
...es Mixtral 8x7B 2.4bpw H6 EXL2  | 63.7    | 32K / 14.3 GB | 2         | 2
...es Mixtral 8x7B 3.0bpw H6 EXL2  | 63.7    | 32K / 17.8 GB | 4         | 1
Notux 8x7b V1.3.5bpw EXL2          | 63.7    | 32K / 20.7 GB | 6         | 2
Notux 8x7b V1.3.5bpw H6 EXL2       | 63.7    | 32K / 20.7 GB | 3         | 1
...es Mixtral 8x7B 6.0bpw H6 EXL2  | 63.7    | 32K / 35.3 GB | 2         | 1
...Hermes 2 Mixtral 8x7B DPO 4bit  | 60      | 32K / 10.6 GB | 128       | 15
...ixtral 8x7B DPO 2.4bpw H6 EXL2  | 60      | 32K / 14.3 GB | 6         | 1
...ralRPChat ZLoss 3.0bpw H6 EXL2  | 60      | 32K / 17.8 GB | 2         | 1
...ralRPChat ZLoss 3.5bpw H6 EXL2  | 60      | 32K / 20.7 GB | 9         | 4
Note: a green score (e.g. "73.2") means the model is better than LoneStriker/Synthia-MoE-v3-Mixtral-8x7B-3.5bpw-h6-exl2-2.

Synthia MoE V3 Mixtral 8x7B 3.5bpw H6 EXL2 2 Parameters and Internals

LLM Name: Synthia MoE V3 Mixtral 8x7B 3.5bpw H6 EXL2 2
Repository: LoneStriker/Synthia-MoE-v3-Mixtral-8x7B-3.5bpw-h6-exl2-2 (open on 🤗)
Required VRAM: 20.7 GB
Updated: 2024-02-29
Maintainer: LoneStriker
Model Type: mixtral
Model Files: 8.6 GB (1-of-3), 8.6 GB (2-of-3), 3.5 GB (3-of-3)
Quantization Type: exl2
Model Architecture: MixtralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.36.0.dev0
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32000
Initializer Range: 0.02
Torch Data Type: bfloat16
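
For context, the listed VRAM is consistent with the quantization level: Mixtral 8x7B has roughly 46.7B parameters, and 46.7B × 3.5 bits / 8 ≈ 20.4 GB of weights, which lines up with the 20.7 GB figure once metadata and per-layer overhead are included (the KV cache for the 32768-token context comes on top of that). Below is a minimal sketch of how an EXL2 quant like this one is typically downloaded and run with the exllamav2 library. It assumes exllamav2's early-2024 Python API; the prompt and sampling settings are illustrative assumptions, not values taken from this listing.

```python
# Minimal sketch, assuming exllamav2's early-2024 API and a CUDA setup with
# ~21+ GB of combined VRAM for the 20.7 GB of quantized weights.
from huggingface_hub import snapshot_download
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Fetch the three weight shards (8.6 + 8.6 + 3.5 GB) plus config and tokenizer files.
model_dir = snapshot_download("LoneStriker/Synthia-MoE-v3-Mixtral-8x7B-3.5bpw-h6-exl2-2")

config = ExLlamaV2Config()
config.model_dir = model_dir
config.prepare()                          # reads config.json (MixtralForCausalLM, 32768 ctx)

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # KV cache, allocated as the model loads
model.load_autosplit(cache)               # split layers across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)    # LlamaTokenizer, 32000-token vocabulary
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()    # illustrative sampling values
settings.temperature = 0.7
settings.top_p = 0.9

output = generator.generate_simple(
    "Explain mixture-of-experts routing in two sentences.",  # placeholder prompt
    settings,
    num_tokens=200,
)
print(output)
```

The `lazy=True` cache plus `load_autosplit` pattern lets exllamav2 spread the weights over multiple GPUs when a single card cannot hold the full 20.7 GB.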
Original data from Hugging Face, OpenCompass, and various public Git repos.
Release v2024022003