LLM Explorer: A Curated Large Language Model Directory and Analytics

Aetheria LongLORA 70B Rope8 32K Fp16 by grimulkan



Tags: Autotrain compatible, Endpoints compatible, Fp16, License: unknown, Llama, Quantized, Region: US, Safetensors, Sharded, Tensorflow

Rank the Aetheria LongLORA 70B Rope8 32K Fp16 Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
Aetheria LongLORA 70B Rope8 32K Fp16 (grimulkan/Aetheria-longLORA-70b-rope8-32k-fp16)

Best Alternatives to Aetheria LongLORA 70B Rope8 32K Fp16

Best Alternatives               HF Rank   Context / VRAM    Downloads   Likes
XuanYuan 70B                    79.55     8K / 138.3 GB     842         42
Tigerbot 70B Chat V2            77.92     4K / 139.2 GB     324         9
QuartetAnemoi 70B T0.0001       76.86     31K / 137.8 GB    509         18
Miqu 70B Alpaca DPO             76.6      31K / 138.7 GB    645         5
BoreanGale 70B                  76.48     32K / 137.8 GB    907         4
Tigerbot 70B Chat V4            75.95     4K / 139.2 GB     836         1
OrcaHermes Mistral 70B Miqu     75.51     31K / 138 GB      67          1
Senku 70B Full                  75.36     31K / 138.7 GB    1677        111
Tulu 2 DPO 70B                  73.77     8K / 138 GB       3817        137
Aurora Nights 70B V1.0          73.77     4K / 137.8 GB     1814        15
Note: a green score (e.g. "73.2") means that model outperforms grimulkan/Aetheria-longLORA-70b-rope8-32k-fp16.

Aetheria LongLORA 70B Rope8 32K Fp16 Parameters and Internals

LLM Name: Aetheria LongLORA 70B Rope8 32K Fp16
Repository: grimulkan/Aetheria-longLORA-70b-rope8-32k-fp16 (open on 🤗 Hugging Face)
Model Size: 70B
Required VRAM: 138.3 GB
Model Type: llama
Model Files (35 shards): 4.1 GB: 1-of-35, 4.1 GB: 2-of-35, 3.9 GB: 3-of-35, 4.1 GB: 4-of-35, 4.0 GB: 5-of-35, 3.9 GB: 6-of-35, 4.1 GB: 7-of-35, 4.0 GB: 8-of-35, 3.9 GB: 9-of-35, 4.1 GB: 10-of-35, 4.0 GB: 11-of-35, 3.9 GB: 12-of-35, 4.1 GB: 13-of-35, 4.0 GB: 14-of-35, 3.9 GB: 15-of-35, 4.1 GB: 16-of-35, 4.0 GB: 17-of-35, 3.9 GB: 18-of-35, 4.1 GB: 19-of-35, 4.0 GB: 20-of-35, 3.9 GB: 21-of-35, 4.1 GB: 22-of-35, 4.0 GB: 23-of-35, 3.9 GB: 24-of-35, 4.1 GB: 25-of-35, 4.0 GB: 26-of-35, 3.9 GB: 27-of-35, 4.1 GB: 28-of-35, 4.0 GB: 29-of-35, 3.9 GB: 30-of-35, 4.1 GB: 31-of-35, 4.0 GB: 32-of-35, 3.9 GB: 33-of-35, 4.1 GB: 34-of-35, 2.1 GB: 35-of-35
Quantization Type: fp16
Model Architecture: LlamaForCausalLM
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.34.1
Tokenizer Class: LlamaTokenizer
Padding Token: <unk>
Vocabulary Size: 32000
Initializer Range: 0.02
Torch Data Type: float16
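The 138.3 GB VRAM figure follows from the precision listed above: roughly 70 billion fp16 parameters at 2 bytes each is about 140 GB, which is what the 35 shards add up to. Below is a minimal loading sketch, not an official recipe: it assumes the repository id shown above and infers a linear RoPE scaling factor of 8 from the "Rope8 32K" model name, since the config itself only reports the 4096 base context.

# Minimal sketch: load the sharded fp16 checkpoint with Transformers.
# Assumptions: repo id taken from this page; rope_scaling is inferred
# from the "Rope8 32K" name (4096 * 8 = 32768), not from the config.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "grimulkan/Aetheria-longLORA-70b-rope8-32k-fp16"

tokenizer = AutoTokenizer.from_pretrained(repo_id)

# ~70B params * 2 bytes (fp16) ≈ 140 GB, so device_map="auto" (requires
# the accelerate package) spreads the 35 shards across available devices.
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,
    device_map="auto",
    rope_scaling={"type": "linear", "factor": 8.0},  # assumed from model name
)

inputs = tokenizer("Once upon a time", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))

With linear scaling, the 4096-token position grid the model was trained on is interpolated across 32768 positions, which is one way to reconcile the config's 4096 context length with the 32K in the model name.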
Original data from Hugging Face, OpenCompass, and various public git repos.
Release v2024022003