DareBeagel 2x7B by shadowml


Tags: autotrain-compatible · conversational · endpoints-compatible · lazymergekit · license:apache-2.0 · merge · mergekit · mixtral · mlabonne/NeuralBeagle14-7B · mlabonne/NeuralDaredevil-7B · model-index · moe · region:us · safetensors · sharded · tensorflow
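The lazymergekit, mergekit, merge, and moe tags indicate that this is a Mixtral-style mixture-of-experts (MoE) merge built from mlabonne/NeuralBeagle14-7B and mlabonne/NeuralDaredevil-7B. As an illustration only (not the model's actual source code), the sketch below shows how Mixtral-style top-k gating routes each token between two experts; the hidden and FFN sizes, and the top_k value, are toy assumptions, not values taken from this checkpoint.

```python
# Illustrative sketch of Mixtral-style top-k routing over two experts.
# Toy dimensions; not taken from the DareBeagel-2x7B weights.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoExpertMoELayer(nn.Module):
    def __init__(self, hidden_size=64, ffn_size=256, num_experts=2, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(hidden_size, num_experts, bias=False)  # router
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(hidden_size, ffn_size),
                nn.SiLU(),
                nn.Linear(ffn_size, hidden_size),
            )
            for _ in range(num_experts)
        )

    def forward(self, x):  # x: (tokens, hidden_size)
        logits = self.gate(x)                             # (tokens, num_experts)
        weights, chosen = torch.topk(logits, self.top_k)  # top-k experts per token
        weights = F.softmax(weights, dim=-1)              # renormalize over the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = chosen[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

layer = TwoExpertMoELayer()
print(layer(torch.randn(5, 64)).shape)  # torch.Size([5, 64])
```

With two experts and top_k=2, every token actually visits both experts and the router only learns the mixing weights; larger Mixtral models use sparser routing (e.g. 2 of 8 experts).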

Rank the DareBeagel 2x7B Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
DareBeagel 2x7B (shadowml/DareBeagel-2x7B)

Best Alternatives to DareBeagel 2x7B

Best Alternatives                  Context / VRAM   Downloads   Likes
MonarchCoder MoE 2x7B              32K / 22.8 GB    2537        1
...ixtral AI CyberBrain Coder 1x2  32K / 25.5 GB    668         1
MixTAO 7Bx2 MoE Instruct V7.0      32K / 25.7 GB    3229        19
DARE TIES 13B                      32K / 25.7 GB    2593        10
... TomGrc FusionNet 7Bx2 MoE 13B  32K / 25.8 GB    3748        49
Laser Dolphin Mixtral 2x7b DPO     32K / 25.8 GB    5806        47
MixTAO 7Bx2 MoE V8.1               32K / 25.8 GB    2992        37
Mixtral 7Bx2 MoE                   32K / 25.8 GB    3778        36
FusionNet 7Bx2 MoE 14B             32K / 25.8 GB    4054        35
Mixtral 7Bx2 MoE 13B               32K / 25.8 GB    2506        7

DareBeagel 2x7B Parameters and Internals

LLM Name: DareBeagel 2x7B
Repository: Open on 🤗 Hugging Face
Model Size: 12.9B parameters
Required VRAM: 25.8 GB
Model Type: mixtral
Model Files: 3 sharded safetensors files (9.9 GB, 10.0 GB, 5.9 GB)
Model Architecture: MixtralForCausalLM
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.36.2
Tokenizer Class: LlamaTokenizer
Padding Token: <s>
Vocabulary Size: 32000
Initializer Range: 0.02
Torch Data Type: float16
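
Given the internals above (MixtralForCausalLM architecture, float16 weights, roughly 25.8 GB of VRAM, and a 32768-token context window), a minimal loading sketch with Hugging Face transformers might look like the following. The prompt text and generation settings are illustrative, not taken from the model card.

```python
# Minimal sketch: load shadowml/DareBeagel-2x7B with transformers >= 4.36.
# Assumes a GPU (or GPUs) with roughly 26 GB of free VRAM for the float16 weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "shadowml/DareBeagel-2x7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)  # LlamaTokenizer per the table above
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the listed torch data type
    device_map="auto",          # spreads the three safetensors shards across devices
)

prompt = "Explain what a mixture-of-experts model is."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```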


Original data from Hugging Face, OpenCompass, and various public Git repositories.
Release v2024040901