Mixtral 34Bx2 MoE 60B by cloudyu


Tags: autotrain-compatible, endpoints-compatible, license: apache-2.0, mixtral, moe, region: us, safetensors, sharded, tensorflow, yi

Mixtral 34Bx2 MoE 60B (cloudyu/Mixtral_34Bx2_MoE_60B)

Quantized Models of the Mixtral 34Bx2 MoE 60B

Model                         Likes   Downloads   VRAM
Mixtral 34Bx2 MoE 60B GGUF    3       359         20 GB
Mixtral 34Bx2 MoE 60B GPTQ    7       4           31 GB
Mixtral 34Bx2 MoE 60B AWQ     4       7           32 GB
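
These VRAM figures line up with a simple weights-only estimate: memory scales with parameter count times bits per weight. Below is an illustrative Python sanity check; real usage adds KV cache, activations, and framework overhead, and the bits-per-weight chosen for the GGUF row is an assumed quantization level, not taken from the source.

```python
# Weights-only memory estimate for a 60.8B-parameter model.
# Illustrative sketch: actual VRAM use also includes KV cache,
# activations, and framework overhead.
PARAMS = 60.8e9  # parameter count from the model card below

def weights_gb(bits_per_weight: float) -> float:
    """Gigabytes needed to hold the weights at a given precision."""
    return PARAMS * bits_per_weight / 8 / 1e9

print(f"bfloat16: {weights_gb(16):.1f} GB")        # ~121.6 GB, the full checkpoint
print(f"4-bit GPTQ/AWQ: {weights_gb(4):.1f} GB")   # ~30.4 GB, near the 31-32 GB above
print(f"~2.6-bit GGUF: {weights_gb(2.6):.1f} GB")  # ~19.8 GB, near the 20 GB above (assumed quant level)
```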

Best Alternatives to Mixtral 34Bx2 MoE 60B

Best Alternatives                     Context / RAM      Downloads   Likes
FusionNet 34Bx2 MoE V0.1              195K / 121.2 GB    640         8
Yi 34Bx2 MoE 60B DPO                  195K / 121.8 GB    2858        2
...34Bx2 MoE V0.1 Full Linear DPO     195K / 121.8 GB    632         2
... Cloudyu Mixtral 34Bx2 MoE 60B     195K / 121.8 GB    636         0
Yi 34Bx2 MoE 60B                      195K / 121.9 GB    1714        64
Bagel Hermes 2x34B                    195K / 121.9 GB    627         15
Yi 34Bx2 MoE 200K                     195K / 121.9 GB    1823        2
FusionNet 34Bx2 MoE                   32K / 121.2 GB     1273        8
...DPO TomGrc FusionNet 34Bx2 MoE     32K / 121.8 GB     606         4
Nous Hermes 2 SUS Chat 2x34B          4K / 121.9 GB      1278        3

Mixtral 34Bx2 MoE 60B Parameters and Internals

LLM Name              Mixtral 34Bx2 MoE 60B
Repository            cloudyu/Mixtral_34Bx2_MoE_60B (open on Hugging Face)
Model Size            60.8b
Required VRAM         121.9 GB
Updated               2024-07-07
Maintainer            cloudyu
Model Type            mixtral
Model Files           13 shards: 9.8 GB (1-of-13), 10.0 GB each (2-of-13 through 12-of-13), 2.1 GB (13-of-13)
Model Architecture    MixtralForCausalLM
License               apache-2.0
Context Length        200000
Model Max Length      200000
Transformers Version  4.36.2
Tokenizer Class       LlamaTokenizer
Padding Token         <s>
Vocabulary Size       64000
Initializer Range     0.02
Torch Data Type       bfloat16
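
Given the MixtralForCausalLM architecture and bfloat16 checkpoint listed above, a minimal loading sketch with Hugging Face transformers might look like the following. It assumes transformers >= 4.36.2 (per the table) and accelerate are installed, and that enough GPU memory is available for the ~121.9 GB of weights; device_map="auto" is one way to shard the 13 files across multiple GPUs.

```python
# Minimal sketch: load cloudyu/Mixtral_34Bx2_MoE_60B with transformers.
# Assumes transformers >= 4.36.2 and accelerate installed, plus enough
# GPU memory for the ~121.9 GB of bfloat16 weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "cloudyu/Mixtral_34Bx2_MoE_60B"
tokenizer = AutoTokenizer.from_pretrained(repo)  # LlamaTokenizer, vocab size 64000
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,  # matches the checkpoint's torch data type
    device_map="auto",           # shards the 13 safetensors files across GPUs
)

inputs = tokenizer("The Mixture-of-Experts architecture", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Note that while the listed context length is 200000 tokens, the KV cache for prompts anywhere near that size adds substantial memory on top of the weights themselves.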


Original data from HuggingFace, OpenCompass and various public git repos.