LLM Explorer: A Curated Large Language Model Directory and Analytics

Nous Hermes 2 Mixtral 8x7B SFT GGUF by TheBloke

What open-source LLMs or SLMs are you looking for? 18,857 models in total.

Base model: nousresearch/nous-h... | Chatml | Distillation | Dpo | En | Finetuned | Gguf | Gpt4 | Has space | Instruct | License: apache-2.0 | Mixtral | Moe | Quantized | Region: us | Rlhf | Synthetic data

Nous Hermes 2 Mixtral 8x7B SFT GGUF Benchmarks

Rank the Nous Hermes 2 Mixtral 8x7B SFT GGUF Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
Nous Hermes 2 Mixtral 8x7B SFT GGUF (TheBloke/Nous-Hermes-2-Mixtral-8x7B-SFT-GGUF)

Best Alternatives to Nous Hermes 2 Mixtral 8x7B SFT GGUF

Best Alternatives                    | HF Rank | Context/RAM   | Downloads / Likes
FusionNet 34Bx2 MoE GGUF             | 67.6    | 0K / 22.4 GB  | 46
...AO 7Bx2 MoE Instruct V7.0 GGUF    | 67.1    | 0K / 4.8 GB   | 2539
Helion 4x34B GGUF                    | 66.2    | 0K / 41.5 GB  | 13
Cosmosis 3x34B GGUF                  | 66.1    | 0K / 31.9 GB  | 15
...Top 5x7B Instruct S5 V0.1 GGUF    | 65.9    | 0K / 2.7 GB   | 41
Go Bruins V2.1.1 GGUF                | 65.7    | 0K / 3.1 GB   | 27
Quantum DPO V0.1 GGUF                | 65.7    | 0K / 3.1 GB   | 41
Quantum V0.01 GGUF                   | 65.5    | 0K / 3.1 GB   | 42
Sakura SOLAR Instruct GGUF           | 65.2    | 0K / 4.5 GB   | 55
...rautLM UNA SOLAR Instruct GGUF    | 65.1    | 0K / 4.5 GB   | 1111
Note: a green score (e.g. "73.2") means that the model is better than TheBloke/Nous-Hermes-2-Mixtral-8x7B-SFT-GGUF.

Nous Hermes 2 Mixtral 8x7B SFT GGUF Parameters and Internals

LLM Name: Nous Hermes 2 Mixtral 8x7B SFT GGUF
Repository: TheBloke/Nous-Hermes-2-Mixtral-8x7B-SFT-GGUF (open on 🤗 Hugging Face)
Model Name: Nous Hermes 2 Mixtral 8X7B SFT
Model Creator: NousResearch
Base Model(s): Nous Hermes 2 Mixtral 8x7B SFT (NousResearch/Nous-Hermes-2-Mixtral-8x7B-SFT)
Required VRAM: 17.3 GB
Updated: 2024-02-28
Maintainer: TheBloke
Model Type: mixtral
Model Files: 17.3 GB, 22.5 GB, 26.4 GB, 28.4 GB, 32.2 GB, 33.2 GB, 38.4 GB, 49.6 GB
Supported Languages: en
GGUF Quantization: Yes
Quantization Type: gguf
Model Architecture: AutoModel
License: apache-2.0
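
The GGUF files listed above can be run locally with a llama.cpp-based runtime. The sketch below is a minimal, non-authoritative example using llama-cpp-python and huggingface_hub; the quantization filename is an assumption (pick whichever quant level in the repo matches the file sizes above), and the ChatML prompt format follows the Chatml tag on this page.

# Minimal sketch: run one of the GGUF quantizations locally with llama-cpp-python.
# The filename below is an assumption -- choose the quant file whose size fits
# your RAM/VRAM budget (the listed files range from 17.3 GB to 49.6 GB).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="TheBloke/Nous-Hermes-2-Mixtral-8x7B-SFT-GGUF",
    filename="nous-hermes-2-mixtral-8x7b-sft.Q4_K_M.gguf",  # assumed quant file name
)

llm = Llama(
    model_path=model_path,
    n_ctx=4096,        # context window to allocate; adjust to your use case
    n_gpu_layers=-1,   # offload all layers to GPU if memory allows
)

# The model is tagged as ChatML, so format the prompt accordingly.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nSummarize what a Mixture-of-Experts model is.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

out = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(out["choices"][0]["text"])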
Original data from Hugging Face, OpenCompass, and various public Git repositories.
Release v2024022003