LLM Explorer: A Curated Large Language Model Directory and Analytics

Aurora Nights 70B V1.0 by sophosympatheia



Tags: arxiv:2307.11760 · autotrain-compatible · en · endpoints-compatible · license:llama2 · llama · region:us · safetensors · sharded · tensorflow

Aurora Nights 70B V1.0 Benchmarks

Rank the Aurora Nights 70B V1.0 Capabilities


Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
Aurora Nights 70B V1.0 (sophosympatheia/Aurora-Nights-70B-v1.0)

Quantized Models of the Aurora Nights 70B V1.0

Quantized Model | Downloads | Size
Aurora Nights 70B V1.0 GGUF | 53 | 29 GB
Aurora Nights 70B V1.0 AWQ | 219 | 36 GB
Aurora Nights 70B V1.0 GPTQ | 117 | 35 GB
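
For local inference, a GGUF quant is typically run with llama.cpp-based tooling rather than loaded as full float16 weights. Below is a minimal sketch using huggingface_hub and llama-cpp-python; the quant repository id and file name are assumptions (community quants are published separately from the original repo), so verify both on Hugging Face before use.

```python
# Minimal sketch: download one GGUF quant and run it locally.
# The repo id and file name are ASSUMPTIONS -- community quants are
# published under separate accounts, so verify both on Hugging Face.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="TheBloke/Aurora-Nights-70B-v1.0-GGUF",   # assumed quant repo
    filename="aurora-nights-70b-v1.0.Q4_K_M.gguf",    # assumed quant file
)

llm = Llama(
    model_path=gguf_path,
    n_ctx=4096,       # matches the model's 4096-token context length
    n_gpu_layers=-1,  # offload all layers to GPU if memory allows
)

out = llm("Describe the aurora borealis in one paragraph.", max_tokens=128)
print(out["choices"][0]["text"])
```

A lower-bit quant shrinks the footprint from the 137.8 GB of float16 weights down toward the sizes listed above, at some cost in output quality.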

Best Alternatives to Aurora Nights 70B V1.0

Best Alternatives | HF Rank | Context / RAM | Downloads | Likes
XuanYuan 70B | 79.55 | 8K / 138.3 GB | 874 | 42
Tigerbot 70B Chat V2 | 77.92 | 4K / 139.2 GB | 324 | 9
QuartetAnemoi 70B T0.0001 | 76.86 | 31K / 137.8 GB | 509 | 18
Miqu 70B Alpaca DPO | 76.6 | 31K / 138.7 GB | 645 | 5
Miqu 1 70B Sf | 76.59 | 31K / 138.7 GB | 18738 | 187
BoreanGale 70B | 76.48 | 32K / 137.8 GB | 907 | 4
Tigerbot 70B Chat V4 | 75.95 | 4K / 139.2 GB | 900 | 1
OrcaHermes Mistral 70B Miqu | 75.51 | 31K / 138 GB | 67 | 1
Senku 70B Full | 75.36 | 31K / 138.7 GB | 1677 | 111
Tulu 2 DPO 70B | 73.77 | 8K / 138 GB | 3847 | 137
Note: a Score shown in green on the site (e.g. "73.2") marks an alternative that outscores sophosympatheia/Aurora-Nights-70B-v1.0.

Aurora Nights 70B V1.0 Parameters and Internals

LLM Name | Aurora Nights 70B V1.0
Repository | sophosympatheia/Aurora-Nights-70B-v1.0 (Hugging Face)
Model Size | 70b
Required VRAM | 137.8 GB
Model Type | llama
Model Files | 17 sharded safetensors files: 8.1 GB (1-of-17), 8.1 GB (2-of-17), 8.2 GB (3-of-17), 8.1 GB (4-of-17), 8.1 GB (5-of-17), 8.1 GB (6-of-17), 8.2 GB (7-of-17), 8.1 GB (8-of-17), 8.1 GB (9-of-17), 8.1 GB (10-of-17), 8.2 GB (11-of-17), 8.1 GB (12-of-17), 8.1 GB (13-of-17), 8.1 GB (14-of-17), 8.2 GB (15-of-17), 8.1 GB (16-of-17), 7.8 GB (17-of-17)
Supported Languages | en
Model Architecture | LlamaForCausalLM
Context Length | 4096
Model Max Length | 4096
Transformers Version | 4.36.1
Tokenizer Class | LlamaTokenizer
Padding Token | <unk>
Vocabulary Size | 32000
Initializer Range | 0.02
Torch Data Type | float16
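
The 137.8 GB Required VRAM figure follows directly from these internals: roughly 70 billion parameters at 2 bytes each in float16 is about 140 GB of weights, which is why the model ships as 17 shards and generally needs multi-GPU or CPU-offloaded inference at full precision. A minimal loading sketch with Hugging Face transformers follows; the repo id comes from this page, while device_map="auto" and the generation settings are illustrative assumptions.

```python
# Minimal sketch: load the full-precision model with transformers.
# ~70B params x 2 bytes (float16) is roughly 140 GB of weights, hence
# device_map="auto" to shard the 17 safetensors files across devices.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sophosympatheia/Aurora-Nights-70B-v1.0"

tokenizer = AutoTokenizer.from_pretrained(model_id)   # LlamaTokenizer, vocab size 32000
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the listed Torch Data Type
    device_map="auto",          # spread layers over available GPUs / CPU RAM
)

inputs = tokenizer("Once upon a time", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)  # stay within the 4096-token context
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
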
Original data from Hugging Face, OpenCompass, and various public Git repositories.
Release v2024022003