Dolphin 2.7 Mixtral 8x7b 8bpw EXL2 by Kearm


Tags: autotrain-compatible, conversational, endpoints-compatible, exl2, instruct, mixtral, moe, pytorch, quantized, region:us, safetensors, sharded, tensorflow, en

Datasets: cognitivecomputations/..., cognitivecomputations/..., ise-uiuc/magicoder-evo..., ise-uiuc/magicoder-oss..., jondurbin/airoboros-2...., ldjnr/capybara, teknium/openhermes

Dolphin 2.7 Mixtral 8x7b 8bpw EXL2 Benchmarks

nn.n% — how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").

Dolphin 2.7 Mixtral 8x7b 8bpw EXL2 Parameters and Internals

LLM Name: Dolphin 2.7 Mixtral 8x7b 8bpw EXL2
Repository: 🤗 https://huggingface.co/Kearm/dolphin-2.7-mixtral-8x7b-8bpw-exl2
Required VRAM: 46.8 GB
Updated: 2024-09-16
Maintainer: Kearm
Model Type: mixtral
Instruction-Based: Yes
Model Files: 8.6 GB (1-of-6), 8.6 GB (2-of-6), 8.6 GB (3-of-6), 8.6 GB (4-of-6), 8.6 GB (5-of-6), 3.8 GB (6-of-6)
Supported Languages: en
Quantization Type: exl2
Model Architecture: MixtralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.37.0.dev0
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32002
Torch Data Type: bfloat16
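Because this is an EXL2 quant, the weights load with the exllamav2 runtime rather than plain transformers. Below is a minimal sketch of downloading and running the model; it assumes exllamav2 and huggingface_hub are installed, that roughly 46.8 GB of GPU VRAM is available across one or more cards, and that the exllamav2 class names (which can shift between releases) match the library's own examples. The prompt follows ChatML, which Dolphin models are trained on; the two ChatML tokens <|im_start|> and <|im_end|> account for the vocabulary size of 32002 (Llama's base 32000 plus 2).

```python
# Minimal sketch: fetch and run the EXL2 quant with exllamav2.
# Assumes exllamav2 and huggingface_hub are installed; class names follow
# the exllamav2 examples and may differ slightly between versions.
from huggingface_hub import snapshot_download
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Download all six safetensors shards (~46.8 GB) plus tokenizer/config files.
model_dir = snapshot_download("Kearm/dolphin-2.7-mixtral-8x7b-8bpw-exl2")

config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # allocate cache as layers load
model.load_autosplit(cache)               # split layers across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7

# Dolphin is trained on ChatML, hence the <|im_start|>/<|im_end|> tokens.
prompt = (
    "<|im_start|>system\nYou are Dolphin, a helpful AI assistant.<|im_end|>\n"
    "<|im_start|>user\nWrite a haiku about quantization.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
print(generator.generate_simple(prompt, settings, 200))
```

With lazy cache allocation and load_autosplit, the 46.8 GB of weights can spread across, for example, two or three 24 GB cards rather than requiring a single large GPU.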

Best Alternatives to Dolphin 2.7 Mixtral 8x7b 8bpw EXL2

Best Alternatives                    Context / RAM    Downloads  Likes
...M 2 8x22B Beige 5.0bpw H6 EXL2    64K / 88.5 GB    8          0
...M 2 8x22B Beige 2.4bpw H6 EXL2    64K / 42.7 GB    6          0
...M 2 8x22B Beige 3.0bpw H6 EXL2    64K / 53.2 GB    5          0
...M 2 8x22B Beige 4.0bpw H6 EXL2    64K / 70.8 GB    5          0
...B Instruct V0.1 8.0bpw H8 EXL2    64K / 120.2 GB   1          1
...8x22b Instruct Oh EXL2 2.25bpw    64K / 40.1 GB    1          1
...eryTour V2 8x7B 4.5bpw H6 EXL2    32K / 26.5 GB    12         2
...it MoE 2bitgs8 Metaoffload HQQ    32K / 24.1 GB    4          20
... 4bit MoE 3bit Metaoffload HQQ    32K / 22.4 GB    3          13
...x7b Zloss Bpw350 H6 EXL2 Rpcal    32K / 20.7 GB    78         4
Note: a green score (e.g. "73.2") means that the model is better than Kearm/dolphin-2.7-mixtral-8x7b-8bpw-exl2.
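As a rough rule, an EXL2 download size tracks bits-per-weight linearly: size in GB ≈ parameters (in billions) × bpw / 8, ignoring embedding and metadata overhead. A quick sanity check against the sizes above, assuming the commonly cited approximate parameter counts of ~46.7B for Mixtral 8x7B and ~141B for Mixtral 8x22B:

```python
# Back-of-the-envelope EXL2 size estimate: params * bpw / 8 bits-per-byte.
def exl2_size_gb(params_billions: float, bpw: float) -> float:
    return params_billions * bpw / 8  # GB, ignoring embeddings/overhead

print(exl2_size_gb(46.7, 8.0))  # ~46.7 -> matches the 46.8 GB listed above
print(exl2_size_gb(141, 5.0))   # ~88.1 -> matches 88.5 GB (8x22B 5.0bpw row)
print(exl2_size_gb(141, 2.4))   # ~42.3 -> matches 42.7 GB (8x22B 2.4bpw row)
```

This is why the 2.25–2.4bpw 8x22B quants in the table fit in roughly the same VRAM as this 8bpw 8x7B quant, despite having three times the parameters.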

Rank the Dolphin 2.7 Mixtral 8x7b 8bpw EXL2 Capabilities

🆘 Have you tried this model? Rate its performance. This feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.