Dolphin 2.6 Mixtral 8x7b GPTQ by TheBloke


Tags: 4-bit · autotrain compatible · base model: cognitivecomputatio... · base model (quantized): cognitive... · conversational · datasets: ehartford/dolphin, ehartford/dolphin-code..., ise-uiuc/magicoder-evo..., ise-uiuc/magicoder-oss..., jondurbin/airoboros-2...., ldjnr/capybara, teknium/openhermes · en · gptq · instruct · mixtral · moe · quantized · region:us · safetensors

Dolphin 2.6 Mixtral 8x7b GPTQ Benchmarks

Scores are shown as percentages ("nn.n%") indicating how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").

Dolphin 2.6 Mixtral 8x7b GPTQ Parameters and Internals

Model Type: mixtral
Additional Notes: The model is uncensored and highly compliant, so use it with caution. See the developer's blog for more details.
Supported Languages: en (proficient)
Training Details:
Data Sources: ehartford/dolphin, jondurbin/airoboros-2.2.1, ehartford/dolphin-coder, teknium/openhermes, ise-uiuc/Magicoder-OSS-Instruct-75K, ise-uiuc/Magicoder-Evol-Instruct-110K, LDJnr/Capybara
Methodology: qLoRA fine-tuning with Axolotl (see the sketch below)
Context Length: 16000
Training Time: 3 days
Hardware Used: 4x A100 GPUs
Model Architecture: Mixtral-8x7b (native 32k context, fine-tuned at 16k)
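The card names qLoRA via Axolotl as the training method. As a rough illustration only, here is a minimal qLoRA setup in plain peft/bitsandbytes; the base-model id, LoRA rank, and target modules below are illustrative assumptions, not the actual Axolotl configuration used for Dolphin 2.6.

```python
# Minimal qLoRA sketch (illustrative; not the actual Axolotl config).
# Assumes: pip install transformers peft bitsandbytes accelerate
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization of the frozen base model -- the "q" in qLoRA.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # matches the card's bfloat16 dtype
)

# Assumed base model id; Dolphin 2.6 fine-tunes Mixtral-8x7b.
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)

# Small trainable LoRA adapters on the attention projections (assumed set).
lora = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # only the adapters are trainable
```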
Input Output:
Input Format: ChatML
Accepted Modalities: text
Output Format: text
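Since the card specifies ChatML as the input format, a minimal sketch of that prompt layout follows; the system and user strings are placeholders, not text from the model card.

```python
# Minimal ChatML prompt builder (the strings below are placeholders).
def chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(chatml_prompt(
    "You are Dolphin, a helpful AI assistant.",
    "Summarize GPTQ quantization in one sentence.",
))
```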
LLM Name: Dolphin 2.6 Mixtral 8x7b GPTQ
Repository 🤗: https://huggingface.co/TheBloke/dolphin-2.6-mixtral-8x7b-GPTQ
Model Name: Dolphin 2.6 Mixtral 8X7B
Model Creator: Cognitive Computations
Base Model(s): cognitivecomputations/dolphin-2.6-mixtral-8x7b
Model Size: 6.1b
Required VRAM: 23.8 GB
Updated: 2025-03-12
Maintainer: TheBloke
Model Type: mixtral
Instruction-Based: Yes
Model Files: 23.8 GB
Supported Languages: en
GPTQ Quantization: Yes
Quantization Type: gptq
Model Architecture: MixtralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.37.0.dev0
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32002
Torch Data Type: bfloat16
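Given the details above (4-bit GPTQ weights of ~23.8 GB, Transformers 4.37, LlamaTokenizer), the repo can plausibly be loaded like other TheBloke GPTQ models. A minimal sketch, assuming a transformers install with the optimum/auto-gptq backend; the sampling parameters are arbitrary.

```python
# Minimal loading sketch. Assumes: pip install transformers optimum auto-gptq
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/dolphin-2.6-mixtral-8x7b-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # LlamaTokenizer, vocab 32002
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # the 4-bit weights need roughly 23.8 GB of VRAM
)

# ChatML prompt, as described in the Input Output section above.
prompt = (
    "<|im_start|>system\nYou are Dolphin, a helpful AI assistant.<|im_end|>\n"
    "<|im_start|>user\nWrite a haiku about quantization.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```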

Best Alternatives to Dolphin 2.6 Mixtral 8x7b GPTQ

Best Alternatives | Context / RAM | Downloads | Likes
...ixtral 8x7B Instruct V0.1 GPTQ | 32K / 23.8 GB | 214020 | 136
Dolphin 2.7 Mixtral 8x7b GPTQ | 32K / 23.8 GB | 528 | 19
Dolphin 2.5 Mixtral 8x7b GPTQ | 32K / 23.8 GB | 269 | 111
...xtral Instruct 8x7b Zloss GPTQ | 32K / 23.8 GB | 45 | 2
....1 LimaRP ZLoss DARE TIES GPTQ | 32K / 23.8 GB | 28 | 6
...nstruct V0.1 LimaRP ZLoss GPTQ | 32K / 23.8 GB | 26 | 3
...tLM Mixtral 8x7B Instruct GPTQ | 32K / 23.8 GB | 66 | 3
... Mixtral 8x7b Instruct V3 GPTQ | 32K / 23.8 GB | 31 | 2
...ixtral 8x7B V0.1 Dolly15K GPTQ | 32K / 23.8 GB | 16 | 2



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227