Dolphin 2.7 Mixtral 8x7b by cognitivecomputations


Tags: Autotrain compatible, Conversational, Dataset:cognitivecomputations/..., Dataset:cognitivecomputations/..., Dataset:ise-uiuc/magicoder-evo..., Dataset:ise-uiuc/magicoder-oss..., Dataset:jondurbin/airoboros-2...., Dataset:ldjnr/capybara, Dataset:teknium/openhermes, En, Endpoints compatible, Instruct, Mixtral, Moe, Pytorch, Region:us, Sharded


Dolphin 2.7 Mixtral 8x7b Parameters and Internals

Model Type 
text generation, coding assistance
Additional Notes 
The model was retrained to fix performance issues. It is highly compliant and uncensored. Loading requires trust_remote_code=True.
Supported Languages 
en (proficient)
Training Details 
Data Sources:
cognitivecomputations/dolphin, jondurbin/airoboros-2.2.1, cognitivecomputations/dolphin-coder, teknium/openhermes, ise-uiuc/Magicoder-OSS-Instruct-75K, ise-uiuc/Magicoder-Evol-Instruct-110K, LDJnr/Capybara
Methodology:
qLoRA fine-tuning with Axolotl (a minimal sketch follows this block); prompts use the ChatML format
Context Length:
16000
Training Time:
3 days
Hardware Used:
4x A100 GPUs
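
The qLoRA methodology named above can be illustrated with a minimal sketch using Hugging Face peft and bitsandbytes. This is not the actual Axolotl configuration used for Dolphin 2.7; the base checkpoint ID and LoRA hyperparameters below are assumptions for illustration only.

```python
# Illustrative qLoRA-style setup: 4-bit frozen base model plus trainable LoRA adapters.
# NOT the Axolotl config used to train Dolphin 2.7; all hyperparameters are assumed.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                       # quantize the frozen base weights to 4-bit
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-v0.1",           # assumed base checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)

lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,  # assumed adapter hyperparameters
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)           # only the small adapter weights are trained
model.print_trainable_parameters()
```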
Input Output 
Input Format:
ChatML
Accepted Modalities:
text
Output Format:
text responses
Performance Tips:
The model is uncensored and highly compliant; you are advised to implement your own alignment layer (for example, a guarded system prompt) before exposing it as a service, so that unethical requests can be refused. A minimal prompt-format sketch follows.
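
Because both training and inference use ChatML, prompts must be wrapped in <|im_start|>/<|im_end|> tags. A minimal sketch; the system prompt text is illustrative and not taken from the model card:

```python
# Build a ChatML prompt for Dolphin 2.7 Mixtral 8x7b.
# The system prompt is an example of a simple "alignment layer"; adapt it to your own policy.
def build_chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    system="You are Dolphin, a helpful assistant. Refuse illegal or harmful requests.",
    user="Write a Python function that checks whether a string is a palindrome.",
)
```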
LLM Name: Dolphin 2.7 Mixtral 8x7b
Repository: https://huggingface.co/cognitivecomputations/dolphin-2.7-mixtral-8x7b
Required VRAM: 93.6 GB
Updated: 2024-12-21
Maintainer: cognitivecomputations
Model Type: mixtral
Instruction-Based: Yes
Model Files (19 shards): 4.9 GB (1-of-19), 5.0 GB (2-of-19), 5.0 GB (3-of-19), 4.9 GB (4-of-19), 5.0 GB (5-of-19), 5.0 GB (6-of-19), 4.9 GB (7-of-19), 5.0 GB (8-of-19), 5.0 GB (9-of-19), 4.9 GB (10-of-19), 5.0 GB (11-of-19), 5.0 GB (12-of-19), 5.0 GB (13-of-19), 4.9 GB (14-of-19), 5.0 GB (15-of-19), 5.0 GB (16-of-19), 4.9 GB (17-of-19), 5.0 GB (18-of-19), 4.2 GB (19-of-19)
Supported Languages: en
Model Architecture: MixtralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.37.0.dev0
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32002
Torch Data Type: bfloat16
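
A minimal sketch for loading the full bfloat16 checkpoint with Hugging Face transformers, assuming enough GPU memory for the ~93.6 GB of sharded weights; the device_map and generation settings are illustrative choices, not prescribed by the card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cognitivecomputations/dolphin-2.7-mixtral-8x7b"

# trust_remote_code=True is required per the model card; bfloat16 matches the stored dtype.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",   # spread the 19 shards across available GPUs (assumes sufficient VRAM)
    trust_remote_code=True,
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)  # e.g. the ChatML prompt built above
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```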

Quantized Models of the Dolphin 2.7 Mixtral 8x7b

Model | Likes | Downloads | VRAM
Dolphin 2.7 Mixtral 8x7b GGUF | 135 | 8288 | 15 GB
Dolphin 2.7 Mixtral 8x7b GPTQ | 19 | 12146 | 23 GB
Dolphin 2.7 Mixtral 8x7b AWQ | 21 | 5743 | 24 GB
Dolphin 2.7 Mixtral 8x7b GGUF | 5 | 613 | 15 GB
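
The GGUF quantizations can run on far less memory via llama.cpp. A minimal sketch using llama-cpp-python; the local file name and quantization level are assumptions, so check the quantized repository for the exact files:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="dolphin-2.7-mixtral-8x7b.Q4_K_M.gguf",  # assumed file name / quant level
    n_ctx=16384,       # context window to allocate
    n_gpu_layers=-1,   # offload all layers to GPU if VRAM allows; use 0 for CPU-only
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
        {"role": "user", "content": "Explain what a mixture-of-experts model is in two sentences."},
    ],
    max_tokens=200,
)
print(out["choices"][0]["message"]["content"])
```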

Best Alternatives to Dolphin 2.7 Mixtral 8x7b

Best Alternatives | Context / RAM | Downloads | Likes
Dolphin 2.6 Mixtral 8x7b | 32K / 93.6 GB | 5778 | 204
...eqlen 4096 Bs 4 Optimum 0 0 23 | 32K / GB | 31 | 0
...eqlen 4096 Bs 4 Optimum 0 0 23 | 32K / GB | 16 | 1
Empower Functions Medium | 32K / 93.6 GB | 24 | 1
...ral 8x7b Instruct V0.1 Int4 Ov | 32K / 0 GB | 277 | 4
Mixtral 8x7B Instruct V0.1 | 32K / GB | 14 | 0
...ct V0.1 Agent Function Calling | 32K / 44.3 GB | 6 | 2
...tral 8x7B Instruct V0.1 Polish | 32K / 93.6 GB | 11 | 1
Taiwan LLM MoE Pilot | 32K / 93.6 GB | 19 | 2
...al 8x7B Instruct V0.1 GPT Fast | 32K / GB | 7 | 1



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217