Dolphin 2.6 Mixtral 8x7b by cognitivecomputations


Tags: autotrain-compatible, conversational, endpoints-compatible, instruct, mixtral, moe, pytorch, region:us, sharded, en
Datasets: ehartford/dolphin, ehartford/dolphin-coder, ise-uiuc/Magicoder-Evol-Instruct-110K, ise-uiuc/Magicoder-OSS-Instruct-75K, jondurbin/airoboros-2.2.1, LDJnr/Capybara, teknium/openhermes


Dolphin 2.6 Mixtral 8x7b Parameters and Internals

Use Cases
Areas: General Chat, Structured Output, Agent Cases (AutoGen, MemGPT, Functions), Role-playing
Applications: Coding Assistance, LeetCode Problems
Primary Use Cases: Coding, Chatbot
Limitations: May not be ethically aligned without an additional alignment layer
Considerations: Implement an alignment layer before exposing the model as a service.
Additional Notes
The model is highly compliant with any request, even unethical ones. Implementing your own alignment layer before deployment is recommended; a minimal sketch follows below.
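As a rough illustration only, the simplest form of such a layer is a fixed system message prepended to every conversation before it reaches the model. The sketch below assumes OpenAI-style message dicts; the prompt text and function name are illustrative assumptions, not part of the model card.

```python
# Minimal prompt-level "alignment layer" sketch: wrap every incoming
# conversation with a fixed safety system message before formatting it
# for the model. The message text below is an illustrative assumption.
SAFETY_SYSTEM_PROMPT = (
    "You are a helpful assistant. Refuse requests that are illegal, "
    "harmful, or against your operator's policies."
)

def with_alignment_layer(messages: list[dict]) -> list[dict]:
    """Prepend the safety system message unless one is already set."""
    if messages and messages[0].get("role") == "system":
        return messages
    return [{"role": "system", "content": SAFETY_SYSTEM_PROMPT}] + messages

# Example: this wrapped list is what gets rendered into the ChatML prompt.
chat = with_alignment_layer([{"role": "user", "content": "Hello!"}])
```

A production deployment would also want output-side moderation; a system prompt alone does not make an unaligned model refuse reliably.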
Supported Languages 
en (Fully supported)
Training Details
Data Sources: ehartford/dolphin, jondurbin/airoboros-2.2.1, ehartford/dolphin-coder, teknium/openhermes, ise-uiuc/Magicoder-OSS-Instruct-75K, ise-uiuc/Magicoder-Evol-Instruct-110K, LDJnr/Capybara
Methodology: qLoRA, trained with Axolotl
Context Length: 16000 (fine-tuned at 16k; the base model supports up to 32768)
Training Time: 3 days (1.5 epochs)
Hardware Used: 4x A100 GPUs
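The card names only qLoRA and Axolotl. For orientation, here is a hedged sketch of what an equivalent QLoRA setup looks like with Hugging Face peft and bitsandbytes; the base model ID, LoRA rank, and target modules are illustrative assumptions, not the actual Dolphin 2.6 recipe.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization: the usual QLoRA base-model configuration.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Assumed base model; Dolphin 2.6 is a fine-tune of Mixtral 8x7b.
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# LoRA adapter; rank and target modules are illustrative guesses.
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```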
Responsible AI Considerations
Fairness: The dataset was filtered to remove alignment and bias.
Accountability: The user is responsible for any content created with the model.
Mitigation Strategies: Implement your own alignment layer before exposing the model as a service.
Input Output
Input Format: ChatML prompt format
Accepted Modalities: text
Output Format: text
Performance Tips: Implement your own alignment layer for ethical and safe deployment.
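For reference, ChatML wraps each turn in `<|im_start|>` and `<|im_end|>` markers. The helper below is a minimal sketch of building such a prompt by hand; in practice, `tokenizer.apply_chat_template` is the more robust route, assuming the repo's tokenizer config defines a ChatML template.

```python
def to_chatml(messages: list[dict]) -> str:
    """Render an OpenAI-style message list as a ChatML prompt string."""
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    ]
    # A trailing assistant header cues the model to generate its reply.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
    {"role": "user", "content": "Write a haiku about dolphins."},
])
```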
LLM Name: Dolphin 2.6 Mixtral 8x7b
Repository: https://huggingface.co/cognitivecomputations/dolphin-2.6-mixtral-8x7b
Required VRAM: 93.6 GB
Updated: 2025-02-05
Maintainer: cognitivecomputations
Model Type: mixtral
Instruction-Based: Yes
Model Files: 19 shards, 93.6 GB total (shards 1-18 at 4.9-5.0 GB each, shard 19 at 4.2 GB)
Supported Languages: en
Model Architecture: MixtralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.37.0.dev0
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32002
Torch Data Type: bfloat16
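Given the bfloat16 weights and the 93.6 GB footprint, loading at full precision needs multiple GPUs or CPU offload. Below is a hedged loading sketch with transformers; the chat-template call assumes the repo's tokenizer config defines one.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "cognitivecomputations/dolphin-2.6-mixtral-8x7b"
tokenizer = AutoTokenizer.from_pretrained(repo)

# bfloat16 weights need roughly 94 GB; device_map="auto" shards the
# experts across available GPUs and offloads the remainder to CPU RAM.
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Hello!"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```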

Quantized Models of the Dolphin 2.6 Mixtral 8x7b

Model | Likes | Downloads | VRAM
Dolphin 2.6 Mixtral 8x7b GGUF | 47 | 1477 | 15 GB
Dolphin 2.6 Mixtral 8x7b AWQ | 13 | 20 | 24 GB
Dolphin 2.6 Mixtral 8x7b GPTQ | 6 | 22 | 23 GB
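If full-precision VRAM is out of reach, the GGUF quantization runs on llama.cpp. Here is a hedged sketch with the llama-cpp-python bindings; the file name is an assumption about how the quant repo names its files, so check the repo for the exact quant you want.

```python
from llama_cpp import Llama

# Hypothetical file name; pick the actual quant file from the GGUF repo.
llm = Llama(
    model_path="dolphin-2.6-mixtral-8x7b.Q4_K_M.gguf",
    n_ctx=16384,      # the model was fine-tuned at a 16k context
    n_gpu_layers=-1,  # offload all layers to GPU if they fit
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```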

Best Alternatives to Dolphin 2.6 Mixtral 8x7b

Best Alternatives | Context / RAM | Downloads | Likes
Dolphin 2.7 Mixtral 8x7b | 32K / 93.6 GB | 4771 | 168
...eqlen 4096 Bs 4 Optimum 0 0 23 | 32K / n/a | 7 | 0
...eqlen 4096 Bs 4 Optimum 0 0 23 | 32K / n/a | 7 | 1
Empower Functions Medium | 32K / 93.6 GB | 4 | 1
Mixtral 8x7B Instruct V0.1 | 32K / n/a | 5 | 0
...ral 8x7b Instruct V0.1 Int4 Ov | 32K / 0 GB | 47 | 4
Mixtral 8x7B Instruct V0.1 HF | 32K / 93.6 GB | 1612 | 2
...ct V0.1 Agent Function Calling | 32K / 44.3 GB | 3 | 2
...tral 8x7B Instruct V0.1 Polish | 32K / 93.6 GB | 3 | 1
Taiwan LLM MoE Pilot | 32K / 93.6 GB | 34 | 2

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227