Pantheon RP 1.6.1 12B Nemo by Gryphe


Tags: Axolotl · Base model (finetune): mistralai/... · Base model: mistralai/mistral-n... · ChatML · En · Finetuned · Instruct · Mistral · Region: us · Roleplay · Safetensors · Sharded · TensorFlow

Pantheon RP 1.6.1 12B Nemo Benchmarks

Benchmark scores (percentages) indicate how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Pantheon RP 1.6.1 12B Nemo (Gryphe/Pantheon-RP-1.6.1-12b-Nemo)

Pantheon RP 1.6.1 12B Nemo Parameters and Internals

Model Type: instruct, finetune, chatml, roleplay
Use Case Areas: roleplay, chat
Applications: general dialogue, coding help, RSS summarization
Primary Use Cases: roleplay experiences with diverse personas
Additional Notes: Contains various personas with unique system prompts for roleplay experiences.
Supported Languages: en (high)
Training Data Sources: deduped Sonnet 3.5 SlimOrca dataset, Pantheon Roleplay dataset
Training Methodology: multi-stage finetuning
Training Context Length: 8,000 tokens
Input Format: ChatML (see the prompt sketch below)
Accepted Modalities: text
Performance Tips: Experiment with temperature settings for optimal inference.
Release Version: 1.6.1
Release Notes: Found and cleaned issues in the datasets, introduced an alternative Pantheon dialogue set, and trained with 8k context for longer conversations.
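As a reference for the ChatML input format, the sketch below renders a prompt with the repo's chat template. It assumes the Hugging Face repo ships a ChatML template; the persona name and system prompt are hypothetical placeholders, not entries from the Pantheon dataset.

```python
# Illustrative sketch: render a ChatML prompt with the repo's chat template.
# The persona ("Aiva") and its system prompt are hypothetical placeholders,
# not taken from the Pantheon Roleplay dataset.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Gryphe/Pantheon-RP-1.6.1-12b-Nemo")

messages = [
    {"role": "system", "content": "You are Aiva, a soft-spoken android librarian."},
    {"role": "user", "content": "What book would you recommend for a rainy evening?"},
]

# With a ChatML template this produces <|im_start|>role ... <|im_end|> blocks,
# ending with an open <|im_start|>assistant tag for the model to complete.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```
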
LLM Name: Pantheon RP 1.6.1 12B Nemo
Repository: https://huggingface.co/Gryphe/Pantheon-RP-1.6.1-12b-Nemo
Base Model(s): Mistral Nemo Base 2407 (mistralai/Mistral-Nemo-Base-2407)
Model Size: 12B
Required VRAM: 24.5 GB
Updated: 2025-02-16
Maintainer: Gryphe
Model Type: mistral
Model Files: 4.9 GB (1-of-5), 4.9 GB (2-of-5), 4.9 GB (3-of-5), 4.9 GB (4-of-5), 4.9 GB (5-of-5)
Supported Languages: en
Model Architecture: MistralForCausalLM
License: apache-2.0
Context Length: 1024000
Model Max Length: 1024000
Transformers Version: 4.45.0.dev0
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <pad>
Vocabulary Size: 131072
Torch Data Type: bfloat16
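As a rough usage sketch tied to the metadata above (MistralForCausalLM architecture, bfloat16 weights, roughly 24.5 GB across five shards), the model can be loaded with plain transformers. The snippet assumes transformers 4.45 or newer plus the accelerate package; the hand-written ChatML system prompt and the sampling settings are placeholders to tune, in line with the temperature tip earlier on the card.

```python
# Minimal loading and generation sketch based on the metadata above.
# Assumes transformers >= 4.45, the `accelerate` package, and enough GPU
# memory for ~24.5 GB of bfloat16 weights (device_map="auto" can offload).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Gryphe/Pantheon-RP-1.6.1-12b-Nemo"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,  # matches the card's Torch Data Type
    device_map="auto",
)

# Hand-written ChatML prompt; the system prompt is a hypothetical placeholder.
prompt = (
    "<|im_start|>system\nYou are a helpful roleplay partner.<|im_end|>\n"
    "<|im_start|>user\nDescribe the tavern we just walked into.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Temperature is only a starting point; the card suggests experimenting with it.
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```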

Best Alternatives to Pantheon RP 1.6.1 12B Nemo

Model | Context / RAM | Downloads / Likes
Captain Eris Violet V0.420 12B | 1000K / 24.5 GB | 3144429
...r Nemo 12B Instruct R 21 09 24 | 1000K / 24.5 GB | 8702107
...s PersonalityEngine V1.1.0 12B | 1000K / 24.5 GB | 61332
Mistral Nemo Kartoffel 12B | 1000K / 24.5 GB | 3304
MN 12B Mimicore GreenSnake | 1000K / 24.5 GB | 1602
MN 12B Mag Mell R1 | 1000K / 24.5 GB | 4853111
Saiga Nemo 12b | 1000K / 24.5 GB | 32754438
SauerkrautLM Nemo 12B Instruct | 1000K / 24.5 GB | 2305822
MN 12B Mimicore WhiteSnake | 1000K / 24.5 GB | 793
Dolphin 2.9.3 Mistral Nemo 12B | 1000K / 24.5 GB | 763898


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227