Zephyr Orpo 141B A35b V0.1 by HuggingFaceH4


Tags: Arxiv:2311.07911, Arxiv:2403.07691, Autotrain compatible, Base model: mistral-community/Mixtral-8x22B-v0.1 (finetune), Conversational, Dataset: argilla/distilabel-capybara-dpo-7k-binarized, Endpoints compatible, Generated from trainer, Mixtral, MoE, ORPO, Region: us, Safetensors, Sharded, Tensorboard, Tensorflow, TRL

Zephyr Orpo 141B A35b V0.1 Benchmarks

Zephyr Orpo 141B A35b V0.1 (HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1)

Zephyr Orpo 141B A35b V0.1 Parameters and Internals

Model Type: Mixture of Experts (MoE)
Additional Notes: The model can produce problematic outputs, as it has not been aligned to human preferences for safety.
Supported Languages: English (primary)
Training Details:
Data Sources: argilla/distilabel-capybara-dpo-7k-binarized
Data Volume: 7k preference instances
Methodology: Odds Ratio Preference Optimization (ORPO); see the training sketch after this list
Training Time: 1.3 hours
Hardware Used: 4 nodes of 8 x H100s
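
The training recipe above (ORPO on the 7k Capybara preference pairs, starting from Mixtral-8x22B) can be outlined with TRL's ORPOTrainer. This is a minimal sketch rather than the exact training script: the hyperparameters shown (beta, learning rate, sequence lengths) are assumptions, and the dataset's chat-formatted columns may need flattening into plain prompt/chosen/rejected text before training.

# Minimal ORPO fine-tuning sketch with TRL's ORPOTrainer.
# Hyperparameters are illustrative assumptions, not the values used for
# zephyr-orpo-141b-A35b-v0.1.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base_id = "mistral-community/Mixtral-8x22B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base_id)  # the real recipe also sets a chat template
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)

# The 7k preference pairs used for ORPO training.
dataset = load_dataset("argilla/distilabel-capybara-dpo-7k-binarized", split="train")

config = ORPOConfig(
    output_dir="zephyr-orpo-141b-A35b",
    beta=0.05,                       # ORPO lambda; assumed value
    learning_rate=5e-6,              # assumed value
    per_device_train_batch_size=1,
    max_length=2048,
    max_prompt_length=1024,
    bf16=True,
)

# ORPOTrainer expects "prompt"/"chosen"/"rejected" columns.
# Newer TRL releases take processing_class= instead of tokenizer=.
trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()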
LLM Name: Zephyr Orpo 141B A35b V0.1
Repository: https://huggingface.co/HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1
Base Model(s): mistral-community/Mixtral-8x22B-v0.1
Model Size: 140.6b
Required VRAM: 207.2 GB
Updated: 2025-02-15
Maintainer: HuggingFaceH4
Model Type: mixtral
Model Files  5.0 GB: 1-of-59   4.8 GB: 2-of-59   4.8 GB: 3-of-59   4.8 GB: 4-of-59   4.8 GB: 5-of-59   4.8 GB: 6-of-59   4.8 GB: 7-of-59   4.8 GB: 8-of-59   4.8 GB: 9-of-59   4.8 GB: 10-of-59   4.8 GB: 11-of-59   4.8 GB: 12-of-59   4.8 GB: 13-of-59   4.8 GB: 14-of-59   4.8 GB: 15-of-59   4.8 GB: 16-of-59   4.8 GB: 17-of-59   4.8 GB: 18-of-59   4.8 GB: 19-of-59   4.8 GB: 20-of-59   4.8 GB: 21-of-59   4.8 GB: 22-of-59   4.8 GB: 23-of-59   4.9 GB: 24-of-59   5.0 GB: 25-of-59   5.0 GB: 26-of-59   4.9 GB: 27-of-59   4.8 GB: 28-of-59   4.8 GB: 29-of-59   4.8 GB: 30-of-59   4.8 GB: 31-of-59   4.8 GB: 32-of-59   4.8 GB: 33-of-59   4.8 GB: 34-of-59   4.8 GB: 35-of-59   4.8 GB: 36-of-59   4.8 GB: 37-of-59   4.8 GB: 38-of-59   4.8 GB: 39-of-59   4.8 GB: 40-of-59   4.8 GB: 41-of-59   4.8 GB: 42-of-59   4.8 GB: 43-of-59
Model Architecture: MixtralForCausalLM (see the loading sketch below)
License: apache-2.0
Context Length: 65536
Model Max Length: 65536
Transformers Version: 4.39.3
Vocabulary Size: 32000
Torch Data Type: bfloat16
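
Given the metadata above (MixtralForCausalLM architecture, bfloat16 weights, 65,536-token context), the checkpoint loads through the standard transformers API. A minimal sketch, assuming enough GPU memory for the roughly 207 GB of sharded weights; the prompt and generation settings are illustrative only.

# Minimal inference sketch for HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the checkpoint's Torch data type
    device_map="auto",           # shards the 59 safetensors files across available GPUs
)

messages = [
    {"role": "system", "content": "You are Zephyr, a helpful assistant."},
    {"role": "user", "content": "Explain mixture-of-experts routing in one paragraph."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))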

Quantized Models of Zephyr Orpo 141B A35b V0.1

Model | Likes | Downloads | VRAM
Zephyr Orpo 141B A35b V0.1 AWQ | 2 | 25 | 73 GB
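
The AWQ variant above fits in roughly 73 GB of VRAM. transformers can load a pre-quantized AWQ checkpoint directly when the autoawq package is installed; the repository id below is a hypothetical placeholder, since the table does not give the full path.

# Sketch of loading an AWQ-quantized Zephyr ORPO 141B checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

awq_id = "<org>/zephyr-orpo-141b-A35b-v0.1-AWQ"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(awq_id)
model = AutoModelForCausalLM.from_pretrained(
    awq_id,
    torch_dtype=torch.float16,  # AWQ kernels run in fp16
    device_map="auto",
)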

Best Alternatives to Zephyr Orpo 141B A35b V0.1

Best Alternatives | Context / RAM | Downloads | Likes
Mixtral 8x22B Instruct V0.1 | 64K / 221.4 GB | 148016 | 713
Mixtral 8x22B V0.1 | 64K / 212 GB | 4399 | 674
WizardLM 2 8x22B | 64K / 216.8 GB | 7157 | 397
Mixtral 8x22B V0.1 | 64K / 221.6 GB | 8327 | 210
Mixtral 8x22B V0.3 | 64K / 221.4 GB | 52 | 3
XLAM 8x22b R | 64K / 211.8 GB | 2591 | 44
...ixtral 8x22B Instruct V0.1 FP8 | 64K / 140.9 GB | 948 | 2
Dolphin 2.9.2 Mixtral 8x22b | 64K / 207.2 GB | 179 | 38
...igHuggyD Grey WizardLM 2 8x22B | 64K / 216.6 GB | 18 | 4
WizardLM 2 8x22B Beige | 64K / 221.4 GB | 30 | 3

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227