AshhLimaRP Mistral 7B by lemonilia


Tags: Autotrain compatible, Endpoints compatible, GGUF, LoRA, Mistral, PyTorch, Quantized, Region: US, Sharded


AshhLimaRP Mistral 7B Parameters and Internals

Model Type 
Roleplaying chat model
Use Cases 
Areas:
Roleplaying; longform-oriented, novel-style chat
Primary Use Cases:
1-on-1 roleplay on Internet forums
Limitations:
Short-form, IRC/Discord-style RP is not supported; the model can ramble or impersonate the user in very long messages; no instruction tuning
Additional Notes 
The model features a length-control mechanism inspired by the 'Roleplay' preset in SillyTavern.
Training Details 
Data Sources:
About 2000 training samples of up to roughly 9k tokens each; human-written lewd stories (the Ashhwriter base data)
Methodology:
Finetuned on top of Ashhwriter, a Mistral 7B model trained on human-written lewd stories, using manually picked and slightly edited RP conversations.
Context Length:
8750 tokens
Hardware Used:
2x NVIDIA A40 GPUs
Input Output 
Input Format:
Extended Alpaca format
Accepted Modalities:
text
Output Format:
Extended Alpaca format
Performance Tips:
Use length modifiers (e.g., 'medium') in the response header to control response length; a prompt sketch follows the release notes below.
Release Notes 
Version:
v1
Notes:
Initial release based on Ashhwriter-Mistral-7B training.
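
The extended Alpaca format and the length modifier mentioned above can be illustrated with a short sketch. The persona/scenario field names and the "Play the role of Character" instruction below follow the upstream LimaRP-style card wording; treat the template as an approximation rather than the exact training format.

```python
# Minimal sketch of the extended Alpaca prompt with a length modifier.
# The exact field wording is approximated from the upstream LimaRP-style card
# and may differ slightly from the model's training data; adjust as needed.

def build_prompt(char_persona, user_persona, scenario, history, length="medium"):
    """history: list of (speaker, utterance) pairs, e.g. ("User", "Hello")."""
    header = (
        "### Instruction:\n"
        f"Character's Persona: {char_persona}\n\n"
        f"User's Persona: {user_persona}\n\n"
        f"Scenario: {scenario}\n\n"
        "Play the role of Character. Taking the above information into "
        "consideration, you must engage in a roleplaying chat with User "
        "below this line. Do not write dialogues and narration for User.\n\n"
    )
    turns = ""
    for speaker, text in history:
        tag = "### Input" if speaker == "User" else "### Response"
        turns += f"{tag}:\n{speaker}: {text}\n\n"
    # The length modifier (e.g. tiny/short/medium/long) goes in the response header.
    return header + turns + f"### Response: (length = {length})\nCharacter:"

print(build_prompt(
    "A sarcastic librarian.",
    "A curious student.",
    "A quiet evening at the library.",
    [("User", "Do you have any books about dragons?")],
))
```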
LLM Name: AshhLimaRP Mistral 7B
Repository: 🤗 https://huggingface.co/lemonilia/AshhLimaRP-Mistral-7B
Model Size: 7B
Required VRAM: 14.4 GB
Updated: 2024-12-21
Maintainer: lemonilia
Model Files: 4.4 GB, 5.9 GB, 2.7 GB, 4.9 GB (1 of 3), 5.0 GB (2 of 3), 4.5 GB (3 of 3)
GGUF Quantization: Yes
Quantization Type: gguf
Model Architecture: AutoModelForCausalLM
License: apache-2.0
Is Biased: none
Tokenizer Class: LlamaTokenizer
PEFT Type: LoRA
LoRA Model: Yes
PEFT Target Modules: k_proj, gate_proj, v_proj, q_proj, o_proj, up_proj, down_proj
LoRA Alpha: 16
LoRA Dropout: 0
R Param: 256
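
Given the architecture (AutoModelForCausalLM), tokenizer class (LlamaTokenizer), and the ~14.4 GB VRAM figure above, the full-precision repository can be loaded with Hugging Face Transformers roughly as follows. This is a minimal sketch; the sampling settings are placeholders, not recommendations from the model card.

```python
# Hedged sketch: loading the full-precision weights with Transformers.
# float16 keeps memory use near the ~14.4 GB listed above; generation settings
# below are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "lemonilia/AshhLimaRP-Mistral-7B"
tokenizer = AutoTokenizer.from_pretrained(repo)  # resolves to a Llama tokenizer
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = "### Instruction:\n...\n\n### Response: (length = medium)\nCharacter:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=300, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```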

Quantized Models of the AshhLimaRP Mistral 7B

Model | Likes | Downloads | VRAM
AshhLimaRP Mistral 7B GGUF | 2 | 102 | 3 GB
AshhLimaRP Mistral 7B GPTQ | 3 | 37 | 4 GB
AshhLimaRP Mistral 7B AWQ | 2 | 22 | 4 GB
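
For the GGUF quantizations listed above, a common route is llama-cpp-python. The .gguf file name below is hypothetical; substitute whichever quantization you actually download.

```python
# Hedged sketch: running a GGUF quantization with llama-cpp-python.
# The .gguf file name is hypothetical; substitute the quant you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./ashhlimarp-mistral-7b.Q4_K_M.gguf",  # hypothetical file name
    n_ctx=8192,       # stays within the ~8.7k-token training context listed above
    n_gpu_layers=-1,  # offload all layers to GPU when one is available
)

out = llm(
    "### Instruction:\n...\n\n### Response: (length = medium)\nCharacter:",
    max_tokens=300,
    temperature=0.8,
)
print(out["choices"][0]["text"])
```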

Best Alternatives to AshhLimaRP Mistral 7B

Best Alternatives | Context / RAM | Downloads | Likes
...wen Math 7b 24 1 100 1 Nonmath | 0K / 15.2 GB | 306 | 0
ReWiz 7B | 0K / 14.5 GB | 240 | 0
ShoriRP V0.75d | 0K / 0.2 GB | 493 | 2
CleverBoi 7B V3 | 0K / 0.2 GB | 140 | 0
Boptruth NeuralMonarch 7B | 0K / 14.4 GB | 468 | 2
...rix Philosophy Mistral 7B LoRA | 0K / 14.4 GB | 4684 | 5
...rendyol 7B Base V1 MtLoRA Entr | 0K / 14.6 GB | 91 | 0
Mistral 7B V0.1 Hitl | 0K / 14.4 GB | 15 | 0
Mistral Llm V4 | 0K / 14.4 GB | 10 | 0
Gemma 7B English To Hinglish | 0K / 17.1 GB | 45 | 0


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217