Llama 3 Chatty 2x8B by Undi95


Tags: Autotrain compatible, Conversational, Endpoints compatible, Instruct, Merge, Mixtral, MoE, Region: US, Safetensors, Sharded, TensorFlow

Llama 3 Chatty 2x8B Benchmarks

nn.n% indicates how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Llama 3 Chatty 2x8B (Undi95/Llama-3-Chatty-2x8B)

Llama 3 Chatty 2x8B Parameters and Internals

Model Type: Text Generation
Use Cases:
- Areas: Role Playing, Text Generation
Additional Notes: The model merges expertise in different RP formats into a single stable model.
Training Details:
- Data Sources: RP data, non-RP data
- Data Volume: 50% non-RP
- Methodology: fine-tuning for specific RP formats
- Model Architecture: MoE of 2x Llama-3-Instruct-8B models
Input/Output:
- Input Format: Llama 3 prompt template
- Accepted Modalities: text
- Output Format: generated RP text
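
A minimal sketch of building a prompt in that format with the repository's own tokenizer, assuming the repo ships the standard Llama 3 chat template (the example messages are illustrative):

```python
from transformers import AutoTokenizer

# Load the tokenizer bundled with the model repository.
tokenizer = AutoTokenizer.from_pretrained("Undi95/Llama-3-Chatty-2x8B")

messages = [
    {"role": "system", "content": "You are a creative roleplay partner."},
    {"role": "user", "content": "Describe the tavern we just walked into."},
]

# apply_chat_template renders the Llama 3 special-token layout, e.g.
# <|begin_of_text|><|start_header_id|>system<|end_header_id|> ... <|eot_id|>
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```
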
LLM Name: Llama 3 Chatty 2x8B
Repository: https://huggingface.co/Undi95/Llama-3-Chatty-2x8B
Model Size: 13.7B
Required VRAM: 27.3 GB
Updated: 2025-03-13
Maintainer: Undi95
Model Type: mixtral
Instruction-Based: Yes
Model Files: 1-of-6 (5.0 GB), 2-of-6 (4.9 GB), 3-of-6 (5.0 GB), 4-of-6 (5.0 GB), 5-of-6 (4.9 GB), 6-of-6 (2.5 GB)
Model Architecture: MixtralForCausalLM
License: cc-by-nc-4.0
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.41.0
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <|begin_of_text|>
Vocabulary Size: 128256
Torch Data Type: bfloat16
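
Given these specs (MixtralForCausalLM, bfloat16 weights, Transformers 4.41.0), a hedged sketch of loading and running the checkpoint with Hugging Face Transformers; device_map="auto" assumes the accelerate package is installed:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Undi95/Llama-3-Chatty-2x8B"

tokenizer = AutoTokenizer.from_pretrained(repo)
# bfloat16 matches the checkpoint's torch_dtype; at ~27.3 GB the full
# model may not fit one consumer GPU, so device_map="auto" lets
# accelerate spread layers across available GPUs and CPU RAM.
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

inputs = tokenizer("Once upon a time", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```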

Quantized Models of Llama 3 Chatty 2x8B

Model | Likes | Downloads | VRAM
Llama 3 Chatty 2x8B AWQ | 0 | 11 | 8 GB
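
A sketch of loading the AWQ variant through Transformers; the repo id below is a hypothetical placeholder (check the actual repository name), and loading AWQ checkpoints this way requires the autoawq package and a CUDA GPU:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id for the AWQ quant listed above; substitute the
# real Hugging Face repository name before use.
repo = "Undi95/Llama-3-Chatty-2x8B-AWQ"

tokenizer = AutoTokenizer.from_pretrained(repo)
# AWQ weights load in their quantized form (requires `pip install autoawq`).
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```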

Best Alternatives to Llama 3 Chatty 2x8B

Best Alternatives | Context / RAM | Downloads | Likes
L3.1 Celestial Stone 2x8B | 128K / 27.3 GB | 312 | 3
...ma 3 2x8B Instruct MoE 64K Ctx | 64K / 27.3 GB | 12 | 4
Defne Llama3 2x8B | 8K / 27.4 GB | 958 | 15
Inixion 2x8B V2 | 8K / 27.4 GB | 6 | 2
MoE Llama3 8bx2 Rag | 8K / 27.3 GB | 10 | 0
Inixion 2x8B | 8K / 27.5 GB | 9 | 1
FinalFintetuning XVIII 2x8B | 8K / 27.5 GB | 24 | 4
Llama 3 Teal Instruct 2x8B MoE | 8K / 27.3 GB | 8 | 1
...lama3 2x8b MoE 41K Experiment1 | 8K / 27.3 GB | 11 | 2
Llama 3 8Bx2 MoE DPO | 8K / 27.4 GB | 8 | 1

Rank the Llama 3 Chatty 2x8B Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from Hugging Face, OpenCompass, and various public Git repositories.
Release v20241227