LLM Explorer: A Curated Large Language Model Directory and Analytics

QuartetAnemoi 70B T0.0001 by alchemonaut

Which open-source LLMs or SLMs are you looking for? 18,870 listed in total.


  Autotrain compatible   Endpoints compatible   License:other   Llama   Merge   Model-index   Region:us   Safetensors   Sharded   Tensorflow

QuartetAnemoi 70B T0.0001 Benchmarks

Rank the QuartetAnemoi 70B T0.0001 Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
QuartetAnemoi 70B T0.0001 (alchemonaut/QuartetAnemoi-70B-t0.0001)

Best Alternatives to QuartetAnemoi 70B T0.0001

Best Alternatives | Score | Context / VRAM | Downloads / Likes
XuanYuan 70B | 79.55 | 8K / 138.3 GB | 87442
Tigerbot 70B Chat V2 | 77.92 | 4K / 139.2 GB | 3249
Miqu 70B Alpaca DPO | 76.63 | 1K / 138.7 GB | 6455
Miqu 1 70B Sf | 76.59 | 31K / 138.7 GB | 18738187
BoreanGale 70B | 76.48 | 32K / 137.8 GB | 9074
Tigerbot 70B Chat V4 | 75.95 | 4K / 139.2 GB | 9001
OrcaHermes Mistral 70B Miqu | 75.51 | 31K / 138 GB | 671
Senku 70B Full | 75.36 | 31K / 138.7 GB | 1677111
Tulu 2 DPO 70B | 73.77 | 8K / 138 GB | 3847137
Aurora Nights 70B V1.0 | 73.77 | 4K / 137.8 GB | 161415
Note: a green Score (e.g. "73.2") means that the model outperforms alchemonaut/QuartetAnemoi-70B-t0.0001.

QuartetAnemoi 70B T0.0001 Parameters and Internals

LLM Name: QuartetAnemoi 70B T0.0001
Repository: Open on 🤗 Hugging Face
Model Size: 70b
Required VRAM: 137.8 GB
Model Type: llama
Model Files: 9.8 GB (1-of-15), 9.8 GB (2-of-15), 9.6 GB (3-of-15), 9.8 GB (4-of-15), 9.9 GB (5-of-15), 9.7 GB (6-of-15), 10.0 GB (7-of-15), 9.9 GB (8-of-15), 9.7 GB (9-of-15), 9.6 GB (10-of-15), 10.0 GB (11-of-15), 9.8 GB (12-of-15), 9.6 GB (13-of-15), 9.7 GB (14-of-15), 0.9 GB (15-of-15)
Model Architecture: LlamaForCausalLM
Context Length: 32764
Model Max Length: 32764
Transformers Version: 4.37.1
Tokenizer Class: LlamaTokenizer
Padding Token: <unk>
Vocabulary Size: 32000
Initializer Range: 0.02
Torch Data Type: float16
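The parameters above imply the memory budget directly: 15 safetensors shards summing to the 137.8 GB "Required VRAM" figure, and roughly 2 bytes per parameter for a 70B model in float16. A minimal sketch of that arithmetic, plus an illustrative (never executed here) loading call via the Hugging Face Transformers API — the repo id is taken from this listing; the helper names and the back-of-envelope estimate are assumptions for illustration:

```python
REPO_ID = "alchemonaut/QuartetAnemoi-70B-t0.0001"

# Shard sizes in GB, as listed in the "Model Files" row above (15 shards).
SHARD_SIZES_GB = [9.8, 9.8, 9.6, 9.8, 9.9, 9.7, 10.0, 9.9, 9.7, 9.6,
                  10.0, 9.8, 9.6, 9.7, 0.9]

def total_checkpoint_gb(shards):
    """Sum of shard sizes; should match the listed 'Required VRAM'."""
    return round(sum(shards), 1)

def fp16_estimate_gb(n_params):
    """Rough weight footprint: 2 bytes per parameter in float16."""
    return n_params * 2 / 1e9

def load_model(repo_id=REPO_ID):
    """Illustrative loading call only — not executed here. Running it
    needs ~138 GB of GPU/CPU memory plus the `transformers`,
    `accelerate`, and `torch` packages."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(
        repo_id,
        torch_dtype=torch.float16,  # card lists float16 weights
        device_map="auto",          # shard layers across available devices
    )
    return tokenizer, model

print(total_checkpoint_gb(SHARD_SIZES_GB))  # 137.8, matching the card
print(fp16_estimate_gb(70e9))               # 140.0, close to the shard total
```

The small gap between the 140 GB back-of-envelope estimate and the 137.8 GB shard total is expected, since "70B" is a rounded parameter count.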
Original data from Hugging Face, OpenCompass, and various public git repos.
Release v2024022003