Deep Miqu 103B by jukofyork


Tags: Merged Model · Autotrain compatible · Base model: jukofyork/Dark-Miqu-70B · Base model: jukofyork/Dawn-Miqu-70B · Endpoints compatible · License: other · Llama · Region: us · Safetensors · Sharded · Tensorflow
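
Deep Miqu 103B is a merged ("frankenmerge") model built from Dark-Miqu-70B and Dawn-Miqu-70B. The exact recipe is not shown on this page; the sketch below is a hypothetical mergekit passthrough config illustrating how two 80-layer 70B Llama models are typically interleaved into a roughly 120-layer, ~103B stack. The layer ranges are illustrative assumptions, not the author's actual configuration.

```python
# Hypothetical mergekit "passthrough" config sketch -- illustrative only,
# NOT jukofyork's actual Deep-Miqu-103B recipe. Interleaving overlapping
# layer slices of two 80-layer 70B models yields a ~120-layer, ~103B stack.
merge_config = """\
slices:
  - sources:
      - model: jukofyork/Dawn-Miqu-70B
        layer_range: [0, 40]
  - sources:
      - model: jukofyork/Dark-Miqu-70B
        layer_range: [20, 60]
  - sources:
      - model: jukofyork/Dawn-Miqu-70B
        layer_range: [40, 80]
merge_method: passthrough
dtype: float16
"""

with open("deep-miqu-103b.yml", "w") as f:
    f.write(merge_config)

# Then, assuming mergekit is installed:
#   mergekit-yaml deep-miqu-103b.yml ./Deep-Miqu-103B
```

A passthrough merge performs no weight averaging; it simply stacks the selected layer slices, which is why the merged model ends up larger than either parent.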

Rank the Deep Miqu 103B Capabilities

Have you tried this model? Rate its performance. This feedback helps the ML community identify the most suitable models for their needs. Your contribution really does make a difference!

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
Deep Miqu 103B (jukofyork/Deep-Miqu-103B)

Best Alternatives to Deep Miqu 103B

Best Alternatives              HF Rank   Context / VRAM    Downloads   Likes
XuanYuan 70B                   79.55     8K / 138.3 GB     1415        44
Tigerbot 70B Chat V2           77.92     4K / 139.2 GB     809         48
Tigerbot 70B Chat V6           75.95     8K / 139.8 GB     9           1
Tigerbot 70B Base V1           73.9      2K / 139.1 GB     2008        15
TigerBot 70B Chat GPTQ         68.3      2K / 36.3 GB      2           7
Tigerbot 70B Chat V2 GPTQ      68.3      2K / 36.3 GB      2           5
Tigerbot 70B Chat V2 AWQ       68.3      2K / 37.6 GB      340         2
TigerBot 70B Chat AWQ          68.3      2K / 37.6 GB      2           1
StableBeluga2 70B GPTQ         67.9      4K / 35.3 GB      79          1
StableBeluga2 70B AWQ          67.9      4K / 36.6 GB      55          2
Note: a green score (e.g. "73.2") means that the model outperforms jukofyork/Deep-Miqu-103B.
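
The "Context / VRAM" column pairs context length with approximate weight storage. A quick back-of-envelope sketch (weights only; KV cache and activations excluded) shows why the full-precision 70B entries sit near 139 GB while the GPTQ/AWQ 4-bit quantizations drop to roughly 36 GB, and why Deep Miqu 103B itself is listed at 206.5 GB in float16:

```python
def weight_gb(n_params_billion: float, bits_per_param: float) -> float:
    """Approximate weight storage in GB (weights only; excludes KV cache)."""
    return n_params_billion * 1e9 * bits_per_param / 8 / 1e9

# ~103B parameters in float16 roughly matches the 206.5 GB listed below.
print(f"103B @ fp16:  {weight_gb(103, 16):.1f} GB")  # ~206.0
# 4-bit GPTQ/AWQ quantization explains the ~36 GB 70B entries above.
print(f"70B  @ 4-bit: {weight_gb(70, 4):.1f} GB")    # ~35.0
```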

Deep Miqu 103B Parameters and Internals

LLM Name: Deep Miqu 103B
Repository: jukofyork/Deep-Miqu-103B (Hugging Face)
Base Model(s): jukofyork/Dark-Miqu-70B, jukofyork/Dawn-Miqu-70B
Merged Model: Yes
Model Size: 103b
Required VRAM: 206.5 GB
Model Type: llama
Model Files: 21 safetensors shards, 206.5 GB total (1: 9.6 GB, 2: 10.0 GB, 3: 10.0 GB, 4: 10.0 GB, 5: 10.0 GB, 6: 9.9 GB, 7: 9.6 GB, 8: 9.7 GB, 9: 10.0 GB, 10: 9.9 GB, 11: 9.9 GB, 12: 9.9 GB, 13: 9.7 GB, 14: 9.7 GB, 15: 9.8 GB, 16: 9.6 GB, 17: 9.8 GB, 18: 10.0 GB, 19: 9.6 GB, 20: 9.8 GB, 21: 10.0 GB)
Model Architecture: LlamaForCausalLM
Context Length: 32764
Model Max Length: 32764
Transformers Version: 4.39.3
Tokenizer Class: LlamaTokenizer
Padding Token: <unk>
Vocabulary Size: 32000
Initializer Range: 0.02
Torch Data Type: float16
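
Given the LlamaForCausalLM architecture, LlamaTokenizer, and float16 dtype listed above, a minimal loading sketch with Hugging Face transformers might look like the following (assuming the accelerate package is installed and there is enough combined GPU/CPU memory for the 21 shards; quantized loading is more practical on most hardware):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jukofyork/Deep-Miqu-103B"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # resolves to LlamaTokenizer
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the listed torch data type
    device_map="auto",          # shard across available GPUs / offload to CPU
)

inputs = tokenizer("The night was dark and", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```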


Original data from HuggingFace, OpenCompass and various public git repos.
Release v2024042801