Mixtral Megamerge Dare 8x7b V1 by martyn


Tags: Autotrain compatible, Conversational, Dare, En, Merge, Mixtral, Moe, Pytorch, Region:us, Sharded, Super mario merge, Tensorflow


Mixtral Megamerge Dare 8x7b V1 Parameters and Internals

Model Type: text-generation
Additional Notes: The model appears to generalize across instruct styles, but the MoE gates are not modified.
Supported Languages: en (proficient)
Training Details:
Data Sources: cognitivecomputations/dolphin-2.6-mixtral-8x7b, mistralai/Mixtral-8x7B-v0.1, mistralai/Mixtral-8x7B-Instruct-v0.1
Methodology: merging with safetensors-merge-supermario
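
The "super mario" (DARE) merge that safetensors-merge-supermario implements drops a random fraction of each fine-tuned model's delta (fine-tuned weights minus base weights) and rescales the surviving entries before adding them back onto the base. Below is a minimal sketch of that drop-and-rescale step in PyTorch; the function name and drop_rate value are illustrative assumptions, not the repository's actual code.

import torch

# Illustrative sketch of DARE drop-and-rescale (not the actual
# safetensors-merge-supermario implementation). drop_rate is assumed.
def dare_merge(base: torch.Tensor, finetuned: torch.Tensor,
               drop_rate: float = 0.9) -> torch.Tensor:
    delta = finetuned - base                    # task vector
    keep = torch.rand_like(delta) >= drop_rate  # randomly drop entries
    delta = delta * keep / (1.0 - drop_rate)    # rescale the survivors
    return base + delta

# Toy usage: merge one fine-tuned tensor onto its base.
base = torch.zeros(4, 4)
finetuned = torch.randn(4, 4)
merged = dare_merge(base, finetuned, drop_rate=0.5)

In a full merge this step would run per-tensor over all three source checkpoints before the rescaled deltas are combined into the base model, which is consistent with the note above that the MoE gate weights are left untouched.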
LLM Name: Mixtral Megamerge Dare 8x7b V1
Repository: 🤗 https://huggingface.co/martyn/mixtral-megamerge-dare-8x7b-v1
Required VRAM: 93.6 GB
Updated: 2025-02-23
Maintainer: martyn
Model Type: mixtral
Model Files: 4.9 GB (1-of-19), 5.0 GB (2-of-19), 5.0 GB (3-of-19), 4.9 GB (4-of-19), 5.0 GB (5-of-19), 5.0 GB (6-of-19), 4.9 GB (7-of-19), 5.0 GB (8-of-19), 5.0 GB (9-of-19), 4.9 GB (10-of-19), 5.0 GB (11-of-19), 5.0 GB (12-of-19), 5.0 GB (13-of-19), 4.9 GB (14-of-19), 5.0 GB (15-of-19), 5.0 GB (16-of-19), 4.9 GB (17-of-19), 5.0 GB (18-of-19), 4.2 GB (19-of-19)
Supported Languages: en
Model Architecture: MixtralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.37.0.dev0
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32002
Torch Data Type: bfloat16
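
Putting the fields above together (bfloat16 weights, 32768-token context, LlamaTokenizer with </s> padding), the checkpoint loads like any other Mixtral model. Here is a minimal sketch assuming the standard transformers API; the prompt and generation settings are placeholders, and fitting the ~93.6 GB of bf16 shards requires multiple GPUs or CPU offload, which device_map="auto" delegates to accelerate.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "martyn/mixtral-megamerge-dare-8x7b-v1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the card's bfloat16 weights
    device_map="auto",           # shard across available GPUs / offload
)

prompt = "Explain mixture-of-experts routing in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))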

Best Alternatives to Mixtral Megamerge Dare 8x7b V1

Model | Context / RAM | Downloads | Likes
Llama 3 IMPACTS 2x8B 64K MLX | 64K / 27.4 GB | 17 | 4
BiMediX Bi | 32K / 93.6 GB | 11779 | 5
Dolphin 2.7 Mixtral 8x7b | 32K / 93.6 GB | 4025 | 169
Dolphin 2.6 Mixtral 8x7b | 32K / 93.6 GB | 4068 | 206
...eqlen 4096 Bs 4 Optimum 0 0 23 | 32K / n/a | 5 | 0
...eqlen 4096 Bs 4 Optimum 0 0 23 | 32K / n/a | 8 | 1
Empower Functions Medium | 32K / 93.6 GB | 14 | 1
Synatra Mixtral 8x7B | 32K / 93.6 GB | 1829 | 14
Mixtral 8x7B Instruct V0.1 | 32K / n/a | 7 | 0
...ral 8x7b Instruct V0.1 Int4 Ov | 32K / 0 GB | 56 | 4


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227