L3.1 MoE 2X8B Deepseek DeepHermes E32 13.7B by DavidAU


Tags: 128k context, autotrain compatible, cognitivecomputations, conversational, cot, deephermes, deepseek, endpoints compatible, fine-tune, hermes, llama 3.1, merge, mixtral, moe, r1, reasoning, region:us, safetensors, sharded, tensorflow, thinking
Base models (merge): deepseek-ai/DeepSeek-R1-Distill-Llama-8B, NousResearch/DeepHermes-3-Llama-3-8B-Preview


L3.1 MoE 2X8B Deepseek DeepHermes E32 13.7B Parameters and Internals

LLM Name: L3.1 MoE 2X8B Deepseek DeepHermes E32 13.7B
Repository: https://huggingface.co/DavidAU/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-13.7B
Base Model(s): NousResearch/DeepHermes-3-Llama-3-8B-Preview, deepseek-ai/DeepSeek-R1-Distill-Llama-8B
Model Size: 13.7B
Required VRAM: 54.9 GB
Updated: 2025-04-30
Maintainer: DavidAU
Model Type: mixtral
Model Files: 12 safetensors shards: 4.8 GB (1-of-12), 5.0 GB (2-of-12), 5.0 GB (3-of-12), 4.9 GB (4-of-12), 5.0 GB (5-of-12), 5.0 GB (6-of-12), 5.0 GB (7-of-12), 5.0 GB (8-of-12), 5.0 GB (9-of-12), 5.0 GB (10-of-12), 3.1 GB (11-of-12), 2.1 GB (12-of-12)
Model Architecture: MixtralForCausalLM
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.46.2
Tokenizer Class: LlamaTokenizer
Padding Token: <|begin▁of▁sentence|>
Vocabulary Size: 128256
Torch Data Type: float32
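
The required-VRAM figure follows from the storage dtype: 13.7B parameters at float32 is 13.7e9 × 4 bytes ≈ 54.8 GB, matching the 54.9 GB of sharded weights listed above. Below is a minimal loading sketch, assuming a standard Hugging Face transformers setup. The repo id comes from the table; the bfloat16 dtype and device_map="auto" are illustrative choices that roughly halve the weight footprint to about 27.4 GB, not the maintainer's recommended settings.

```python
# Minimal loading sketch (assumes transformers >= 4.46 per the table above;
# bfloat16 and device_map="auto" are illustrative, not maintainer-recommended).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "DavidAU/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-13.7B"

tokenizer = AutoTokenizer.from_pretrained(repo_id)

# Checkpoint is stored as float32 (12 shards, 54.9 GB). Loading in bfloat16
# roughly halves the weight footprint: 13.7e9 params * 2 bytes ~= 27.4 GB.
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # spread layers across available GPUs, offload to CPU if needed
)

prompt = "Briefly explain what a mixture-of-experts model is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

device_map="auto" requires the accelerate package; on a single GPU with less than ~28 GB of VRAM, layers that do not fit are offloaded to CPU memory at a latency cost.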

Best Alternatives to L3.1 MoE 2X8B Deepseek DeepHermes E32 13.7B

Best Alternatives                    Context   RAM       Downloads   Likes
L3.1 MoE 2x8B V0.2                   128K      27.3 GB   4           6
HAI SER                              128K      27.3 GB   38          15
L3.1 Celestial Stone 2x8B            128K      27.3 GB   31          23
...ma 3 2x8B Instruct MoE 64K Ctx    64K       27.3 GB   12          4
Defne Llama3 2x8B                    8K        27.4 GB   686         15
Penny Llama3 2x8b                    8K        27.3 GB   6           1
Kilo 2x8B                            8K        27.5 GB   12          1
Llama 3 Elyza Youko MoE 2x8B         8K        27.3 GB   6           0
Llama 3 ELYZA Hermes 2x8B            8K        27.4 GB   3           1
Inixion 2x8B V2                      8K        27.4 GB   6           2


Original data from HuggingFace, OpenCompass and various public git repos.