Mixtral 7Bx2 MoE 13B by cloudyu


Tags: Autotrain compatible · Conversational · Endpoints compatible · Instruct · License: cc-by-nc-4.0 · Mixtral · MoE · Region: US · Safetensors · Sharded · TensorFlow

Rank the Mixtral 7Bx2 MoE 13B Capabilities

Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference!

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
Mixtral 7Bx2 MoE 13B (cloudyu/Mixtral_7Bx2_MoE_13B)

Best Alternatives to Mixtral 7Bx2 MoE 13B

Best Alternatives | Context / RAM | Downloads | Likes
MixTAO 7Bx2 MoE Instruct V7.0 | 32K / 25.7 GB | 627 | 19
MemGPT DPO MoE Test | 32K / 25.8 GB | 1 | 5
...tral 7B Instruct V0.2 2x7B MoE | 32K / 25.8 GB | 1368 | 4
Mistral Math 2x7b Mix | 32K / 25.8 GB | 379 | 4
Megatron V3 2x7B | 32K / 25.8 GB | 633 | 3
...tral Instruct MoE Experimental | 32K / 25.8 GB | 627 | 2
MoEstral 2x7B | 32K / 25.8 GB | 7 | 2
Rain 2x7B MoE 32K V0.1 | 32K / 25.8 GB | 1 | 2
My Mixtral 2x7B | 32K / 25.8 GB | 4 | 1
Mistral 2x7b V0.1 | 32K / 25.8 GB | 3 | 1

Mixtral 7Bx2 MoE 13B Parameters and Internals

LLM Name: Mixtral 7Bx2 MoE 13B
Repository: cloudyu/Mixtral_7Bx2_MoE_13B (open on Hugging Face)
Model Size: 12.9B
Required VRAM: 25.8 GB
Updated: 2024-07-07
Maintainer: cloudyu
Model Type: mixtral
Instruction-Based: Yes
Model Files: 9.9 GB (1 of 3), 10.0 GB (2 of 3), 5.9 GB (3 of 3)
Model Architecture: MixtralForCausalLM
License: cc-by-nc-4.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.36.2
Tokenizer Class: LlamaTokenizer
Padding Token: <s>
Vocabulary Size: 32000
Initializer Range: 0.02
Torch Data Type: bfloat16
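
The parameters above give everything needed to load the model with the Hugging Face transformers library: a MixtralForCausalLM architecture, bfloat16 weights split across three safetensors shards (about 25.8 GB total), a LlamaTokenizer, and a 32768-token context window. The snippet below is a minimal loading-and-generation sketch assuming transformers >= 4.36.2 and enough GPU/CPU memory for the full-precision weights; the prompt and sampling settings are illustrative and not part of the model card.

    # Minimal sketch: load cloudyu/Mixtral_7Bx2_MoE_13B and generate text.
    # Assumes transformers >= 4.36.2 and enough memory for ~25.8 GB of bf16 weights.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "cloudyu/Mixtral_7Bx2_MoE_13B"

    # The repo's tokenizer config resolves to LlamaTokenizer (vocab size 32000, pad token <s>).
    tokenizer = AutoTokenizer.from_pretrained(model_id)

    # bfloat16 matches the stored torch dtype; the three safetensors shards
    # are downloaded and reassembled automatically.
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,
        device_map="auto",  # spread weights across available GPUs/CPU
    )

    # Illustrative prompt; keep prompt plus generation within the 32768-token context.
    prompt = "Explain what a mixture-of-experts language model is."
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
    print(tokenizer.decode(output[0], skip_special_tokens=True))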


Original data from Hugging Face, OpenCompass, and various public Git repositories.
Release v2024042801