Narisumashi 11B V1.5 by Alsebay


Tags: autotrain compatible · base model (finetune): Sao10K/Fimbulvetr-11B-v2 · en · endpoints compatible · llama · region: us · roleplay · safetensors · sft · sharded · tensorflow · trl · unsloth

Narisumashi 11B V1.5 Benchmarks

Scores are shown as nn.n%, indicating how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
Narisumashi 11B V1.5 (Alsebay/Narisumashi-11B-v1.5)

Narisumashi 11B V1.5 Parameters and Internals

Model Type: text generation, transformers, roleplay

Use Cases
Areas: roleplay
Applications: text generation
Limitations: the model did not learn all of the dataset's information well

Additional Notes
The training dataset consists mainly of Chinese novels, with an emphasis on specific themes such as 'skinsuit', 'possession', and 'transform'.

Supported Languages
English (basic), Chinese (prompt triggering), Japanese (prompt triggering); see the example prompt sketched below.

Training Details
Data Sources: Chinese novels
Context Length: 8000 (reported in the training details; the released config lists 4096, see the specifications below)

Input / Output
Accepted Modalities: text
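Below is a minimal sketch of a roleplay prompt that exercises the theme-tag "prompt triggering" noted above. The Alpaca-style template, the English theme tags, and the sample message are illustrative assumptions, not a format documented by the card; Fimbulvetr-11B-v2 derivatives are often prompted with Alpaca- or Vicuna-style templates, and per the notes above, Chinese or Japanese keywords may trigger the dataset's themes more reliably.

```python
# Hypothetical roleplay prompt. The Alpaca-style layout and the theme tags
# ("transform", "possession") are assumptions for illustration only; check the
# Hugging Face model card for the author's recommended format.
PROMPT_TEMPLATE = """### Instruction:
You are roleplaying as a character in an ongoing story. Stay in character.
Themes: transform, possession

### Input:
{user_message}

### Response:
"""

if __name__ == "__main__":
    message = "The ritual begins, and the protagonist slowly starts to transform."
    print(PROMPT_TEMPLATE.format(user_message=message))
```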
LLM Name: Narisumashi 11B V1.5
Repository: 🤗 https://huggingface.co/Alsebay/Narisumashi-11B-v1.5
Base Model(s): Fimbulvetr 11B V2 (Sao10K/Fimbulvetr-11B-v2)
Model Size: 11B
Required VRAM: 21.4 GB
Updated: 2024-12-03
Maintainer: Alsebay
Model Type: llama
Model Files: 4.9 GB (1-of-5), 5.0 GB (2-of-5), 4.9 GB (3-of-5), 4.9 GB (4-of-5), 1.7 GB (5-of-5)
Supported Languages: en
Model Architecture: LlamaForCausalLM
License: cc-by-nc-4.0
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.40.2
Tokenizer Class: LlamaTokenizer
Padding Token: <unk>
Vocabulary Size: 32000
Torch Data Type: float16
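A minimal loading-and-generation sketch based on the specifications above: LlamaForCausalLM architecture, LlamaTokenizer, float16 weights, a 4096-token context, and five sharded safetensors files totalling about 21.4 GB (consistent with 11B parameters at roughly 2 bytes each in float16). The generation settings are illustrative assumptions, not recommendations from the model card.

```python
# Minimal sketch: load Alsebay/Narisumashi-11B-v1.5 with Transformers (>= 4.40.2).
# Expects roughly 21.4 GB of VRAM in float16; the sharded safetensors files are
# downloaded and assembled automatically. device_map="auto" needs the accelerate package.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Alsebay/Narisumashi-11B-v1.5"

tokenizer = AutoTokenizer.from_pretrained(repo_id)   # LlamaTokenizer, 32000-token vocab
model = AutoModelForCausalLM.from_pretrained(         # resolves to LlamaForCausalLM
    repo_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Illustrative prompt; see the hypothetical roleplay template sketched earlier.
prompt = "### Instruction:\nContinue the roleplay in character.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Keep prompt plus completion within the 4096-token context window from the config.
output = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```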

Best Alternatives to Narisumashi 11B V1.5

Best Alternatives | Context / RAM | Downloads | Likes
...ral 11B Omni OP 1K 2048 Ver0.1 | 32K / 21.4 GB | 92 | 0
MIstral 11B Omni OP U1k Ver0.1 | 32K / 21.4 GB | 85 | 0
Llama 3 Synatra 11B V1 20K | 20K / 23 GB | 13 | 9
Fimbulvetr 11B V2.1 16K | 16K / 21.4 GB | 94 | 17
Moistral 11B V2 | 8K / 21.4 GB | 59 | 21
Moistral 11B V3 | 8K / 21.4 GB | 468 | 97
Narumashi 11B V0.9 | 8K / 21.4 GB | 54 | 1
Moistral 11B V5d E4 | 8K / 21.4 GB | 15 | 1
Moistral 11B V5a | 8K / 21.4 GB | 20 | 1
Moistral 11B V5b | 8K / 21.4 GB | 12 | 1

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227