Aura MoE 2x4B by AuraIndustries


Tags: Base model (finetune): IntervitensInc/Llama-3.1-Minitron-4B-Width-Base-chatml · Datasets: anthracite-core/full-o…, fourohfour/instruct phase, fourohfour/rp phase, mielikki/erebus-87k · en · Instruct · Mixtral · MoE · Region: us · Safetensors · Sharded · Tensorflow


Aura MoE 2x4B Parameters and Internals

LLM Name: Aura MoE 2x4B
Repository: 🤗 https://huggingface.co/AuraIndustries/Aura-MoE-2x4B
Base Model(s): IntervitensInc/Llama-3.1-Minitron-4B-Width-Base-chatml (listed once per expert)
Model Size: 7.2B parameters
Required VRAM: 14.5 GB
Updated: 2025-03-18
Maintainer: AuraIndustries
Model Type: mixtral
Instruction-Based: Yes
Model Files: 3 shards: 5.0 GB (1 of 3), 5.0 GB (2 of 3), 4.5 GB (3 of 3)
Supported Languages: en
Model Architecture: MixtralForCausalLM
License: apache-2.0
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.47.0
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <|finetune_right_pad_id|>
Vocabulary Size: 128256
Torch Data Type: bfloat16
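
The listing above contains everything needed to load the model with the Hugging Face transformers library. The sketch below is illustrative only and is not taken from the model card: it assumes the standard AutoModelForCausalLM/AutoTokenizer API, that the accelerate package is installed for device_map="auto", and that a chat template ships with the tokenizer (plausible given the "-chatml" base model and the Instruct tag).

    # Minimal loading sketch (assumptions noted above; not from the model card)
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "AuraIndustries/Aura-MoE-2x4B"  # repository listed above

    # bfloat16 matches the listed torch dtype; expect roughly 14.5 GB of VRAM.
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,
        device_map="auto",  # requires the accelerate package
    )

    # Assumption: the instruct fine-tune bundles a ChatML-style chat template.
    messages = [{"role": "user", "content": "Summarize mixture-of-experts routing in two sentences."}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    output = model.generate(input_ids, max_new_tokens=128)
    print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))

Because the architecture is MixtralForCausalLM, the gating network routes each token across the two 4B experts automatically; no MoE-specific arguments are needed at load time.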


Original data from HuggingFace, OpenCompass, and various public git repositories.
Release v20241227