Dolphin 2.9.2 Mixtral 8x22b 6.0bpw H8 EXL2 by FuturisticVibes

 ยป  All LLMs  ยป  FuturisticVibes  ยป  Dolphin 2.9.2 Mixtral 8x22b 6.0bpw H8 EXL2   URL Share it on

  6-bit   Autotrain compatible   Axolotl Base model:mistral-community/m... Base model:quantized:mistral-c...   Conversational Dataset:abacusai/systemchat-1.... Dataset:cognitivecomputations/... Dataset:cognitivecomputations/... Dataset:cognitivecomputations/... Dataset:cognitivecomputations/... Dataset:huggingfaceh4/ultracha...   Dataset:internlm/agent-flan Dataset:locutusque/function-ca... Dataset:m-a-p/codefeedback-fil... Dataset:microsoft/orca-math-wo...   Dataset:teknium/openhermes-2.5   En   Endpoints compatible   Exl2   Generated from trainer   Mixtral   Moe   Quantized   Region:us   Safetensors   Sharded   Tensorflow

Dolphin 2.9.2 Mixtral 8x22b 6.0bpw H8 EXL2 Benchmarks

nn.n% — How the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o") or GPT-4 ("gpt4").
Dolphin 2.9.2 Mixtral 8x22b 6.0bpw H8 EXL2 (FuturisticVibes/dolphin-2.9.2-mixtral-8x22b-6.0bpw-h8-exl2)

Dolphin 2.9.2 Mixtral 8x22b 6.0bpw H8 EXL2 Parameters and Internals

Model Type 
instruction, conversational, coding
Additional Notes 
This model was trained on data generated from GPT4, among other models. Suggested use includes implementing own alignment layer before exposing the model as a service.
Supported Languages 
en (unknown)
Training Details 
Data Sources:
GPT4, other models
Methodology:
FFT on 50% parameters using ChatML prompt template format
Context Length:
64000
Training Time:
1 week on 8xH100
Hardware Used:
8xH100 node provided by Crusoe Cloud
Input Output 
Input Format:
Standard prompting format for AI models
Output Format:
Standard output for AI models
LLM NameDolphin 2.9.2 Mixtral 8x22b 6.0bpw H8 EXL2
Repository ๐Ÿค—https://huggingface.co/FuturisticVibes/dolphin-2.9.2-mixtral-8x22b-6.0bpw-h8-exl2 
Base Model(s)  mistral-community/Mixtral-8x22B-v0.1   mistral-community/Mixtral-8x22B-v0.1
Required VRAM105.8 GB
Updated2025-02-05
MaintainerFuturisticVibes
Model Typemixtral
Model Files  8.6 GB: 1-of-13   8.6 GB: 2-of-13   8.5 GB: 3-of-13   8.6 GB: 4-of-13   8.5 GB: 5-of-13   8.6 GB: 6-of-13   8.5 GB: 7-of-13   8.6 GB: 8-of-13   8.5 GB: 9-of-13   8.6 GB: 10-of-13   8.5 GB: 11-of-13   8.6 GB: 12-of-13   3.1 GB: 13-of-13
Supported Languagesen
Quantization Typeexl2
Model ArchitectureMixtralForCausalLM
Licenseapache-2.0
Context Length65536
Model Max Length65536
Transformers Version4.40.2
Tokenizer ClassLlamaTokenizer
Padding Token</s>
Vocabulary Size32002
Torch Data Typebfloat16

Best Alternatives to Dolphin 2.9.2 Mixtral 8x22b 6.0bpw H8 EXL2

Best Alternatives
Context / RAM
Downloads
Likes
...oE V0.1 DPO F16 4.0bpw H6 EXL2195K / 31.3 GB60
...oE V0.1 DPO F16 5.0bpw H6 EXL2195K / 38.8 GB50
WizardLM 2 8x22 EXL2 4.0bpw64K / 70.9 GB71
...rdLM 2 8x22B Beige EXL2 5.0bpw64K / 88.4 GB130
...M 2 8x22B Beige 4.0bpw H6 EXL264K / 70.8 GB100
...M 2 8x22B Beige 3.0bpw H6 EXL264K / 53.2 GB60
...M 2 8x22B Beige 5.0bpw H6 EXL264K / 88.5 GB60
...M 2 8x22B Beige 2.4bpw H6 EXL264K / 42.7 GB50
...B Instruct V0.1 8.0bpw H8 EXL264K / 120.2 GB51
...2 Mixtral 8x22b 8.0bpw H8 EXL264K / 125.1 GB72

Rank the Dolphin 2.9.2 Mixtral 8x22b 6.0bpw H8 EXL2 Capabilities

๐Ÿ†˜ Have you tried this model? Rate its performance. This feedback would greatly assist ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! ๐ŸŒŸ

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

What open-source LLMs or SLMs are you in search of? 42577 in total.

Our Social Media →  
Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227