Aurora Nights 103B V1.0 by sophosympatheia


Tags: Arxiv:2307.11760, Autotrain compatible, En, Endpoints compatible, Llama, Region: us, Safetensors, Sharded, Tensorflow


Aurora Nights 103B V1.0 Parameters and Internals

Model Type: Roleplaying, Storytelling

Use Cases
Areas: Roleplaying, Creative Writing
Considerations: You are responsible for whatever you do with it.

Additional Notes: This model responds to prompting and is tuned for creativity in roleplaying scenarios.

Supported Languages: en (Full Proficiency)

Training Details
Context Length: 6144
Model Architecture: 120 layers

Input Output
Input Format: Example context and prompt templates provided for SillyTavern
Accepted Modalities: text
Output Format: Text-based responses for roleplaying and storytelling
Performance Tips (a sampler sketch follows this list):
- Recommended max context is 6144 tokens.
- Use Quadratic Sampling with a smoothing factor of 0.2-0.5.
- Experiment with Min-P values and repetition penalties for optimal results.
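As a concrete starting point for these tips, here is a minimal sketch using the llama-cpp-python API against a local GGUF quant. The model file name, prompt, and sampler values are illustrative assumptions, not values from this card; quadratic (smoothing-factor) sampling is exposed by front ends such as text-generation-webui and koboldcpp rather than by this particular API, so only the context size, Min-P, and repetition-penalty tips appear below.

```python
# A minimal sketch, assuming llama-cpp-python and a local GGUF quant of this model.
from llama_cpp import Llama

llm = Llama(
    model_path="aurora-nights-103b-v1.0.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=6144,  # recommended max context from the card
)

output = llm(
    "Continue the scene: the aurora flared over the frozen harbor as",  # illustrative prompt
    max_tokens=400,
    min_p=0.10,          # starting point; the card suggests experimenting with Min-P
    repeat_penalty=1.1,  # mild repetition penalty, tuned alongside Min-P
    temperature=1.0,
)
print(output["choices"][0]["text"])
```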
LLM Name: Aurora Nights 103B V1.0
Repository: https://huggingface.co/sophosympatheia/Aurora-Nights-103B-v1.0
Model Size: 103b
Required VRAM: 206.6 GB
Updated: 2025-04-19
Maintainer: sophosympatheia
Model Type: llama
Model Files (21 safetensors shards, 206.6 GB total): 9.6 GB (1-of-21), 9.9 GB (2-of-21), 10.0 GB (3-of-21), 9.7 GB (4-of-21), 9.8 GB (5-of-21), 9.8 GB (6-of-21), 10.0 GB (7-of-21), 10.0 GB (8-of-21), 9.8 GB (9-of-21), 9.8 GB (10-of-21), 9.8 GB (11-of-21), 9.8 GB (12-of-21), 9.9 GB (13-of-21), 9.7 GB (14-of-21), 10.0 GB (15-of-21), 10.0 GB (16-of-21), 9.8 GB (17-of-21), 9.6 GB (18-of-21), 10.0 GB (19-of-21), 9.8 GB (20-of-21), 9.8 GB (21-of-21)
Supported Languages: en
Model Architecture: LlamaForCausalLM
License: llama2
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.35.2
Tokenizer Class: LlamaTokenizer
Padding Token: <unk>
Vocabulary Size: 32000
Torch Data Type: float16
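Given the LlamaForCausalLM architecture, float16 weights, and 21-shard safetensors layout listed above, a minimal loading sketch with the transformers library could look like the following; the device placement and prompt are illustrative, and the full-precision weights need roughly 206.6 GB of combined device memory.

```python
# A minimal sketch, assuming transformers >= 4.35 plus accelerate, and enough
# combined GPU/CPU memory (~206.6 GB) to hold the float16 weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sophosympatheia/Aurora-Nights-103B-v1.0"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # LlamaTokenizer, vocab size 32000
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the card's Torch Data Type
    device_map="auto",          # spreads the 21 safetensors shards across devices
)

inputs = tokenizer("The aurora rolled across the night sky,", return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```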

Quantized Models of Aurora Nights 103B V1.0

Model                          Likes   Downloads   VRAM
Aurora Nights 103B V1.0 GGUF   12      752         43 GB
Aurora Nights 103B V1.0 GPTQ   3       2           52 GB
Aurora Nights 103B V1.0 AWQ    1       1           54 GB
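To fetch one of these quantized variants, a short sketch with huggingface_hub follows. The repo_id and filename are assumptions based on common community naming for this model's quants and should be verified on the hub before use.

```python
# A minimal sketch, assuming the huggingface_hub library. The repo_id and
# filename are hypothetical (verify the actual quant repository on the hub).
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="TheBloke/Aurora-Nights-103B-v1.0-GGUF",   # assumed quant repo
    filename="aurora-nights-103b-v1.0.Q2_K.gguf",      # smallest quant, ~43 GB per the table above
)
print(path)  # local cache path, ready for a GGUF runtime such as llama.cpp
```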

Best Alternatives to Aurora Nights 103B V1.0

Best Alternatives            Context / RAM      Downloads   Likes
Dark Miqu 103B               31K / 206.4 GB     30          2
Midnight Miqu 103B V1.5      31K / 206.4 GB     169         18
BigWeave V15 103B            31K / 206.6 GB     3           2
Midnight Miqu 103B V1.0      31K / 206.4 GB     37          13
OmegaDolphin 103B V0.1       4K / 175.6 GB      22          1
Midnight Rose 103B V2.0.3    4K / 206.6 GB      11          9
KitchenSink 103b             4K / 178.6 GB      2           9
Midnight Rose 103B V1.0      4K / 206.6 GB      16          3
Rogue Rose 103B V0.2         4K / 206.5 GB      36          31
Lila 103B L2                 4K / 206.2 GB      0           0


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227