LLM Explorer: A Curated Large Language Model Directory and Analytics

Dolphin 2.2 70B 5.0bpw H6 EXL2 by LoneStriker

Which open-source LLMs or SLMs are you looking for? 18,732 models in total.


Tags: Merged Model · Autotrain compatible · Dataset:ehartford/dolphin · Dataset:ehartford/samantha-dat... · Dataset:ehartford/wizardlm evo... · Dataset:jondurbin/airoboros-2.... · En · Endpoints compatible · Exl2 · Instruct · License:llama2 · Llama · Pytorch · Quantized · Region:us · Sharded · Tensorflow

Rank the Dolphin 2.2 70B 5.0bpw H6 EXL2 Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable models for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
Dolphin 2.2 70B 5.0bpw H6 EXL2 (LoneStriker/dolphin-2.2-70b-5.0bpw-h6-exl2)

Best Alternatives to Dolphin 2.2 70B 5.0bpw H6 EXL2

Best Alternatives               HF Rank   Context / VRAM    Downloads   Likes
Dolphin 2.2 70B                 70.6      4K / 138 GB       1929        43
SOLAR 0 70B 16bit               70.11     4K / 138 GB       29532       50
Platypus2 70B Instruct          69.3      4K / 138 GB       3235        172
MegaDolphin 120B                68.91     4K / 240.7 GB     2423        59
ORCA LLaMA 70B QLoRA            67.6      4K / 138 GB       2197        52
Platypus QLoRA LLaMA 70b        67.57     4K / 138 GB       2130        3
Llama 2 70B Instruct            67.38     2K / 138 GB       1951        62
Instruct Llama70B Dolly15k      66.42     4K / 138 GB       3576        0
Swallow 70B Instruct Hf         65.74     4K / 139.1 GB     10589       33
Dolphin 2.2 70B GPTQ            61.9      4K / 35.3 GB      4           5
Note: green Score (e.g. "73.2") means that the model is better than LoneStriker/dolphin-2.2-70b-5.0bpw-h6-exl2.
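The roughly 138 GB VRAM figure shared by most of the fp16 alternatives above follows directly from parameter count times bytes per weight. A minimal sanity check (decimal-GB approximation; the ~69B parameter count for Llama-2-70B-class models is an assumption, not from the table):

```python
def fp16_size_gb(n_params: float) -> float:
    """Approximate size of an fp16 checkpoint in decimal GB (2 bytes per weight)."""
    return n_params * 2 / 1e9

# ~69B-parameter Llama-2-70B-class model -> ~138 GB, as listed above
print(round(fp16_size_gb(69e9), 1))
# MegaDolphin 120B -> ~240 GB, close to the 240.7 GB listed
print(round(fp16_size_gb(120e9), 1))
```

The same arithmetic explains why the GPTQ entry needs only 35.3 GB: 4-bit weights cut the fp16 footprint by roughly a factor of four.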

Dolphin 2.2 70B 5.0bpw H6 EXL2 Parameters and Internals

LLM Name: Dolphin 2.2 70B 5.0bpw H6 EXL2
Repository: Open on 🤗 Hugging Face
Merged Model: Yes
Model Size: 70b
Required VRAM: 43.6 GB
Model Type: llama
Model Files: 8.6 GB (1-of-6), 8.5 GB (2-of-6), 8.5 GB (3-of-6), 8.5 GB (4-of-6), 8.6 GB (5-of-6), 0.9 GB (6-of-6)
Supported Languages: en
Quantization Type: exl2
Model Architecture: LlamaForCausalLM
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.34.1
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32002
Initializer Range: 0.02
Torch Data Type: float16
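The shard sizes listed under Model Files should add up to the Required VRAM entry, and the 5.0 bits-per-weight target in the model name predicts roughly the same total. A quick check (the ~69B parameter count is an assumption for a Llama-2-70B-class model, not stated on this page):

```python
# File sizes from the Model Files entry above, in GB
shards_gb = [8.6, 8.5, 8.5, 8.5, 8.6, 0.9]
print(round(sum(shards_gb), 1))  # 43.6, matching "Required VRAM: 43.6 GB"

# EXL2 quantizes to a target bits-per-weight (here 5.0 bpw);
# estimated size = params * bpw / 8 bits-per-byte
est_gb = 69e9 * 5.0 / 8 / 1e9
print(round(est_gb, 1))  # ~43.1, close to the 43.6 GB on disk
```

The small gap between the bpw estimate and the actual total is expected: quantized checkpoints also carry scales, the tokenizer/config files, and some tensors kept at higher precision.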
Original data from Hugging Face, OpenCompass, and various public Git repositories.
Release v2024022003