Psyfighter2 13B Vore by SnakyMcSnekFace


Tags: Arxiv:2305.14314, Arxiv:2312.03732, Arxiv:2402.01306, Autotrain compatible, Base model:finetune:koboldai/l..., Base model:koboldai/llama2-13b..., En, Finetuned, Llama, Not-for-all-audiences, Pytorch, Region:us, Safetensors, Sharded, Storywriting, Tensorflow

Psyfighter2 13B Vore Benchmarks

Scores (nn.n%) indicate how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Psyfighter2 13B Vore (SnakyMcSnekFace/Psyfighter2-13B-vore)

Psyfighter2 13B Vore Parameters and Internals

Model Type 
text-generation
Use Cases 
Applications:
Storywriting assistant, conversational model in chat, interactive choose-your-own-adventure text game
Primary Use Cases:
Understand vore context
Limitations:
Not intended for use by anyone below 18 years old
Supported Languages 
en (proficient)
Training Details 
Data Sources:
~55 MiB of free-form text containing stories centered on the vore theme; a private dataset of adventure transcripts in the Kobold AI adventure format
Methodology:
Fine-tuning with QLoRA adapter, Adventure mode SFT, Domain adaptation
Context Length:
4096
Training Time:
~24 hours on an NVIDIA GeForce RTX 4060 Ti for domain adaptation, plus an additional 150 minutes for Adventure mode SFT
Hardware Used:
NVIDIA GeForce RTX 4060 Ti
Model Architecture:
QLoRA adapter configuration
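The card states that domain adaptation used a QLoRA adapter and took ~24 hours on a single NVIDIA GeForce RTX 4060 Ti. A back-of-the-envelope memory estimate (all figures below are illustrative assumptions, not from the card) shows why QLoRA makes a 13B model trainable on a consumer GPU:

```python
# Rough VRAM estimate for QLoRA fine-tuning of a 13B model.
# All numbers are back-of-the-envelope assumptions, not from the model card.
params = 13e9                          # approximate parameter count
base_gb = params * 0.5 / 1e9           # 4-bit quantized base weights: 0.5 bytes/param
# LoRA adapter weights plus their optimizer state are small relative to the
# frozen base model; assume well under 1 GB for a low-rank adapter.
adapter_gb = 0.5
total_gb = base_gb + adapter_gb
print(f"~{total_gb:.1f} GB of weights")  # activations and gradients add a few GB on top
```

In contrast, full-precision fine-tuning of the same model would need the 26 GB of float16 weights alone, plus gradients and optimizer state, which is far beyond a single consumer GPU.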
Input Output 
Input Format:
### Instruction:
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Input:
{prompt}

### Response:
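Assembling this template in code is straightforward. The helper below is a hypothetical illustration (the function name and its argument are mine, not from the model card); it fills the `{prompt}` slot and returns the full string to send to the model:

```python
def build_prompt(prompt: str) -> str:
    """Fill the Alpaca-style template from the model card with the user's prompt."""
    return (
        "### Instruction:\n"
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "### Input:\n"
        f"{prompt}\n\n"
        "### Response:\n"
    )

text = build_prompt("You explore the dragon's cave.")
```

The model then generates its continuation after the trailing `### Response:` marker.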
Release Notes 
Date:
14/09/2024
Notes:
Aligned the model for better Adventure Mode flow and improved narrative quality
Date:
09/06/2024
Notes:
Fine-tuned the model to follow Kobold AI Adventure Mode format
Date:
02/06/2024
Notes:
Fixed errors in training and merging, significantly improving the overall prose quality
Date:
25/05/2024
Notes:
Updated training process, making the model more coherent and improving the writing quality
Date:
13/04/2024
Notes:
Uploaded the first version of the model
LLM Name: Psyfighter2 13B Vore
Repository 🤗: https://huggingface.co/SnakyMcSnekFace/Psyfighter2-13B-vore
Base Model(s): LLaMA2 13B Psyfighter2 (KoboldAI/LLaMA2-13B-Psyfighter2)
Model Size: 13b
Required VRAM: 26 GB
Updated: 2025-02-05
Maintainer: SnakyMcSnekFace
Model Type: llama
Model Files: 5.0 GB (1-of-6), 5.0 GB (2-of-6), 5.0 GB (3-of-6), 4.9 GB (4-of-6), 4.9 GB (5-of-6), 1.2 GB (6-of-6)
Supported Languages: en
Model Architecture: LlamaForCausalLM
License: llama2
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.44.2
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32000
Torch Data Type: float16
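The 26 GB required-VRAM figure follows directly from the float16 data type: 2 bytes per parameter across roughly 13 billion parameters, which also matches the combined size of the six shard files listed above. A quick sanity check (the 13e9 parameter count is an approximation):

```python
# Sanity-check the required-VRAM figure from the parameter count and dtype.
params = 13e9                  # approximate parameter count of a 13B LLaMA-2 model
bytes_per_param = 2            # float16
weights_gb = params * bytes_per_param / 1e9
shards_gb = 5.0 + 5.0 + 5.0 + 4.9 + 4.9 + 1.2  # shard sizes listed above
print(weights_gb, shards_gb)   # both come out to about 26 GB
```

This is why running the model unquantized requires more memory than any single consumer GPU offers; quantized builds (e.g. 4-bit) reduce the footprint to roughly a quarter of this.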

Best Alternatives to Psyfighter2 13B Vore

Best Alternatives | Context / RAM | Downloads | Likes
Yarn Llama 2 13B 128K | 128K / 26 GB | 4778 | 113
Luminaura RP 13B | 128K / 26 GB | 9 | 0
Agent Llama2 13B 80K | 80K / 26.4 GB | 14 | 0
Chat Llama2 13B 80K | 80K / 52.8 GB | 11 | 0
LongAlign 13B 64K | 64K / 26 GB | 17 | 13
LongAlign 13B 64K Base | 64K / 26 GB | 14 | 3
Yarn Llama 2 13B 64K | 64K / 26 GB | 4899 | 17
Openbuddy Llama2 13B V15p1 64K | 64K / 26.1 GB | 6 | 4
Openbuddy Llama2 13b64k V15 | 64K / 26.1 GB | 13 | 1
Airoboros L2 13B 2.1 YaRN 64K | 64K / 26 GB | 11 | 7



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227