LLM Explorer: A Curated Large Language Model Directory and Analytics

Senku 70B Full by ShinojiResearch

A directory of 18,870 open-source LLMs and SLMs.


Base model: 152334h/miqu-1-70b-...   Dataset: open-orca/slimorca   License: cc0-1.0   Tags: Generated from trainer, Llama, Peft, Region:us, Safetensors, Sharded, Tensorflow

Senku 70B Full Benchmarks


Best Alternatives to Senku 70B Full

Model                        | Score | Context / VRAM  | Downloads | Likes
XuanYuan 70B                 | 79.55 | 8K / 138.3 GB   | 874       | 42
Tigerbot 70B Chat V2         | 77.92 | 4K / 139.2 GB   | 324       | 9
QuartetAnemoi 70B T0.0001    | 76.86 | 31K / 137.8 GB  | 509       | 18
Miqu 70B Alpaca DPO          | 76.6  | 31K / 138.7 GB  | 645       | 5
Miqu 1 70B Sf                | 76.59 | 31K / 138.7 GB  | 18738     | 187
BoreanGale 70B               | 76.48 | 32K / 137.8 GB  | 907       | 4
Tigerbot 70B Chat V4         | 75.95 | 4K / 139.2 GB   | 900       | 1
OrcaHermes Mistral 70B Miqu  | 75.51 | 31K / 138 GB    | 67        | 1
Tulu 2 DPO 70B               | 73.77 | 8K / 138 GB     | 3847      | 137
Aurora Nights 70B V1.0       | 73.77 | 4K / 137.8 GB   | 1614      | 15
Note: a green score (e.g. "73.2") means the listed model scores higher than ShinojiResearch/Senku-70B-Full.
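The VRAM figures in the table track a simple estimate: parameter count times bytes per parameter. A minimal sketch, assuming a nominal 70B parameter count and float16 weights (2 bytes each); the helper name is hypothetical, and real serving adds activation and KV-cache overhead on top:

```python
# Back-of-the-envelope weight-memory estimate for an LLM checkpoint.
# Hypothetical helper; actual serving needs extra memory for activations
# and the KV cache beyond the raw weight footprint computed here.

def weight_vram_gb(n_params: float, bytes_per_param: float) -> float:
    """Weight footprint in gigabytes (1 GB = 1e9 bytes)."""
    return n_params * bytes_per_param / 1e9

# A nominal 70B model stored in float16 (2 bytes per parameter):
print(weight_vram_gb(70e9, 2))  # 140.0, close to the ~138 GB figures above
```

The same arithmetic explains why quantized variants need far less memory: at 4 bits (0.5 bytes per parameter) the same weights fit in roughly 35 GB.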

Senku 70B Full Parameters and Internals

LLM Name: Senku 70B Full
Repository: Open on 🤗 Hugging Face
Base Model(s): Miqu 1 70B Sf (152334H/miqu-1-70b-sf)
Model Size: 70b
Required VRAM: 138.7 GB
Model Type: llama
Model Files: 29 safetensors shards: 4.7 GB each for parts 1-2, 5-7, 10-12, 15-17, 20-22, and 25-27; 5.0 GB each for parts 3-4, 8-9, 13-14, 18-19, 23-24, and 28; 3.8 GB for part 29
Model Architecture: LlamaForCausalLM
Context Length: 32764
Model Max Length: 32764
Transformers Version: 4.37.2
Tokenizer Class: LlamaTokenizer
Padding Token: <unk>
Vocabulary Size: 32000
Initializer Range: 0.02
Torch Data Type: float16
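The settings above map directly onto a standard transformers loading call. The sketch below is a hedged illustration, not the maintainers' documented usage: the repo id is taken from this page, shard_filename reflects the usual sharded-safetensors naming convention, and actually running main() requires roughly the 138.7 GB of VRAM listed above:

```python
# Hypothetical loading sketch for this sharded Llama checkpoint.
# Assumptions: the Hugging Face repo id "ShinojiResearch/Senku-70B-Full"
# and enough GPU memory (~138.7 GB at float16) across available devices.

def shard_filename(i: int, total: int = 29) -> str:
    """Usual naming convention for sharded safetensors checkpoints."""
    return f"model-{i:05d}-of-{total:05d}.safetensors"

def main() -> None:
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo_id = "ShinojiResearch/Senku-70B-Full"
    tokenizer = AutoTokenizer.from_pretrained(repo_id)  # LlamaTokenizer, vocab 32000
    model = AutoModelForCausalLM.from_pretrained(
        repo_id,
        torch_dtype=torch.float16,  # matches "Torch Data Type: float16" above
        device_map="auto",          # spread the 29 shards across available GPUs
    )
    # from_pretrained resolves every shard (e.g. shard_filename(1), i.e.
    # "model-00001-of-00029.safetensors") via the checkpoint's index file;
    # no manual handling of the shard list is needed.
    prompt = "Summarize the Senku 70B model card in one sentence."
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(out[0], skip_special_tokens=True))

if __name__ == "__main__":
    main()
```

With device_map="auto", transformers places layers across GPUs (and CPU, if needed) automatically, which is why no explicit per-device configuration appears in the call.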
Original data from Hugging Face, OpenCompass, and various public Git repositories.
Release v2024022003