Nous Hermes 2 Mixtral 8x7B SFT GGUF by second-state


Tags: autotrain compatible, base model: nousresearch/nous-h..., chatml, distillation, dpo, en, finetuned, gguf, gpt4, instruct, license: apache-2.0, mixtral, moe, q2, quantized, region: us, rlhf, synthetic data

Rank the Nous Hermes 2 Mixtral 8x7B SFT GGUF Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
Nous Hermes 2 Mixtral 8x7B SFT GGUF (second-state/Nous-Hermes-2-Mixtral-8x7B-SFT-GGUF)

Best Alternatives to Nous Hermes 2 Mixtral 8x7B SFT GGUF

Best Alternatives | HF Rank | Context/RAM | Downloads | Likes
...Hermes 2 Mixtral 8x7B DPO 4bit | 68.4 | 32K / 10.6 GB | 9 | 18
...ixtral 8x7B DPO 2.4bpw H6 EXL2 | 68.4 | 32K / 14.3 GB | 1 | 1
...ralRPChat ZLoss 3.0bpw H6 EXL2 | 68.4 | 32K / 17.8 GB | 2 | 1
... Mixtral 8x7B SFT 3bpw H6 EXL2 | 68.4 | 32K / 17.8 GB | 6 | 0
...ralRPChat ZLoss 3.5bpw H6 EXL2 | 68.4 | 32K / 20.7 GB | 2 | 4
...ixtral 8x7B SFT 3.5bpw H6 EXL2 | 68.4 | 32K / 20.7 GB | 7 | 1
...ixtral 8x7B DPO 3.5bpw H6 EXL2 | 68.4 | 32K / 20.7 GB | 1 | 1
... 8x7B DPO 3.7bpw H6 EXL2 Rpcal | 68.4 | 32K / 21.9 GB | 2 | 4
...hon Mixtral V1.3.75bpw H6 EXL2 | 68.4 | 32K / 22.2 GB | 3 | 3
...ixtral 8x7B DPO 4.0bpw H6 EXL2 | 68.4 | 32K / 23.6 GB | 7 | 0
Note: green Score (e.g. "73.2") means that the model is better than second-state/Nous-Hermes-2-Mixtral-8x7B-SFT-GGUF.

Nous Hermes 2 Mixtral 8x7B SFT GGUF Parameters and Internals

LLM Name: Nous Hermes 2 Mixtral 8x7B SFT GGUF
Repository: Open on 🤗
Model Name: Nous Hermes 2 Mixtral 8X7B SFT
Model Creator: NousResearch
Base Model(s): Nous Hermes 2 Mixtral 8x7B SFT (NousResearch/Nous-Hermes-2-Mixtral-8x7B-SFT)
Required VRAM: 17.3 GB
Updated: 2024-07-07
Maintainer: second-state
Model Type: mixtral
Model Files: 17.3 GB, 24.2 GB, 22.5 GB, 20.4 GB, 26.4 GB, 28.4 GB, 26.7 GB, 32.2 GB, 33.2 GB, 32.2 GB, 38.4 GB, 49.6 GB
Supported Languages: en
GGUF Quantization: Yes
Quantization Type: gguf|q2|q4_k|q5_k
Model Architecture: MixtralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.36.0.dev0
Vocabulary Size: 32002
Initializer Range: 0.02
Torch Data Type: bfloat16
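
For readers who want to try one of the GGUF files listed above locally, the following is a minimal sketch using llama-cpp-python, assuming a quant file has already been downloaded from the second-state repository. The filename, generation settings, and prompt are illustrative assumptions, not part of this listing; the 32768-token context and the ChatML prompt style follow the tags and parameters shown above.

```python
# Minimal sketch: loading a GGUF quant of this model with llama-cpp-python.
# The filename below is an assumption -- substitute whichever quant file
# (q2 / q4_k / q5_k) you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Nous-Hermes-2-Mixtral-8x7B-SFT-Q5_K_M.gguf",  # hypothetical local path
    n_ctx=32768,       # matches the 32768 context length listed above
    n_gpu_layers=-1,   # offload all layers to GPU if VRAM allows; use 0 for CPU-only
)

# The model is tagged "chatml", so the prompt follows the ChatML template.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nSummarize what a Mixture-of-Experts model is.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

out = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```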


Original data from HuggingFace, OpenCompass and various public git repos.
Release v2024042801