Hyperion 3.0 Mixtral 3x7B by Locutusque


Tags: Autotrain compatible · Dataset: locutusque/dibt-instru... · Dataset: locutusque/hyperion-v3... · Dataset: pygmalionai/pippa · en · Endpoints compatible · Mixtral · MoE · Region: us · Safetensors · Sharded · Tensorflow

Hyperion 3.0 Mixtral 3x7B Benchmarks

Hyperion 3.0 Mixtral 3x7B (Locutusque/Hyperion-3.0-Mixtral-3x7B)

Hyperion 3.0 Mixtral 3x7B Parameters and Internals

Model Type: Mixture of Experts (MoE)
Use Cases:
Limitations: Not suitable for production environments or critical applications.
Considerations: Intended for research and experimentation purposes only.
Additional Notes: The model uses the `hyperion-3.0-beta` architecture as the base, with a `bfloat16` output dtype. The gating mechanism is set to `hidden` and two experts are consulted per token.
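
These routing settings can be read back from the published configuration. A minimal sketch using `transformers` (the attribute names follow the standard `MixtralForCausalLM` configuration; the expected values in the comments are inferred from the notes above, not verified output):

```python
# Minimal sketch: inspect the MoE routing settings of the published model.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Locutusque/Hyperion-3.0-Mixtral-3x7B")
print(config.num_local_experts)    # expected: 3 (a 3x7B mixture)
print(config.num_experts_per_tok)  # expected: 2 experts consulted per token
print(config.torch_dtype)          # expected: torch.bfloat16
```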
Supported Languages: en (English)
Training Details:
Data Sources: Locutusque/dibt-instruct, PygmalionAI/PIPPA, Locutusque/hyperion-v3.0
Methodology: QLoRA and supervised fine-tuning (SFT)
Model Architecture: hyperion-3.0-beta
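
For context, QLoRA fine-tunes a 4-bit-quantized base model through small low-rank adapters. A minimal, illustrative setup with `transformers` and `peft` is sketched below; the adapter rank, alpha, and target modules are placeholder assumptions, not Locutusque's actual training configuration:

```python
# Illustrative QLoRA setup, not the author's actual training script.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize base weights to 4 bit
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "Locutusque/Hyperion-3.0-Mixtral-3x7B",
    quantization_config=bnb,
    device_map="auto",
)
# Hypothetical adapter settings; rank/alpha/targets are placeholders.
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the adapters are trainable
```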
LLM Name: Hyperion 3.0 Mixtral 3x7B
Repository: 🤗 https://huggingface.co/Locutusque/Hyperion-3.0-Mixtral-3x7B
Model Size: 18.5b
Required VRAM: 37.1 GB
Updated: 2025-03-14
Maintainer: Locutusque
Model Type: mixtral
Model Files: 19 sharded safetensors files (shard 1-of-19: 1.9 GB; shards 2-of-19 through 18-of-19: 2.0 GB each; shard 19-of-19: 1.2 GB; 37.1 GB total)
Supported Languages: en
Model Architecture: MixtralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.38.2
Tokenizer Class: LlamaTokenizer
Padding Token: <s>
Vocabulary Size: 32000
Torch Data Type: bfloat16
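
The 37.1 GB VRAM figure follows directly from the weights: 18.5B parameters × 2 bytes per `bfloat16` parameter ≈ 37 GB, before activations and KV cache. A minimal inference sketch (the prompt and generation settings are illustrative):

```python
# Minimal inference sketch; assumes roughly 37 GB of GPU memory for the
# bf16 weights alone, plus headroom for activations and KV cache.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Locutusque/Hyperion-3.0-Mixtral-3x7B"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,  # matches the published torch_dtype
    device_map="auto",           # spread the 19 shards across available GPUs
)

prompt = "Explain mixture-of-experts routing in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```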

Best Alternatives to Hyperion 3.0 Mixtral 3x7B

| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Lumina 3.5 | 32K / 37.1 GB | 3015 | 0 |
| EastAsia 4x7B MoE Experiment | 32K / 37.1 GB | 1746 | 1 |
| Topxtral 4x7B V0.1 | 32K / 37.1 GB | 1708 | 4 |
| Blitz AI MoE V0.7 | 32K / 37.1 GB | 12 | 1 |
| Blitz AI MoE V0.4 | 32K / 37.1 GB | 17 | 1 |
| HeroBophades 3x7B | 32K / 37.1 GB | 23 | 2 |
| Wizard Kun Lake 3x7B MoE | 32K / 37.1 GB | 13 | 1 |
| NaruMOE 3x7B V2 | 32K / 37.1 GB | 9 | 0 |
| MoE 3x7b QA Code Inst | 32K / 37 GB | 24 | 4 |
| Pearl 3x7B | 32K / 37.1 GB | 22 | 2 |



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227