Llama 3 Monika Ddlc 8B V1 by 922-CA


Tags: 4bit · AutoTrain compatible · Conversational · Dataset: 922-ca/mocha-v1a · Endpoints compatible · Instruct · Llama · PyTorch · Quantized · Region: US · SFT · Sharded · TRL · Unsloth

Llama 3 Monika Ddlc 8B V1 (922-CA/Llama-3-monika-ddlc-8b-v1)

Llama 3 Monika Ddlc 8B V1 Parameters and Internals

Model Type
chat, role-playing
Use Cases
Primary Use Cases:
chat model with limited role-playing (RP) ability
Limitations:
not the strongest general-purpose model; less capable than alternatives at specialized tasks
Considerations:
for best results, replace the 'Human' and 'Assistant' turn labels with 'Player' and 'Monika' (see the prompt sketch below)
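For illustration only, a prompt in that convention ends with Monika's turn label and lets the model complete it (the player line here is invented):

```text
Player: Do you ever get tired of the same routine in the club?
Monika:
```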
Additional Notes
Aims to closely reflect the character of Monika from Doki Doki Literature Club (DDLC); outputs may include hallucinations and factual inaccuracies.
Training Details
Data Sources:
dialogue scraped from the game, Reddit, and Twitter
Data Volume:
~600+ items
Methodology:
supervised fine-tuning (SFT) plus manual editing of the data; a hypothetical sketch of the setup follows
Training Time:
1 epoch
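The card's tags (SFT, TRL, Unsloth) suggest the fine-tune used Unsloth together with TRL's SFTTrainer. The snippet below is a hypothetical sketch of such a run, not the maintainer's actual script: the base-model id, dataset text column, LoRA settings, and hyperparameters are all assumptions; only the 1-epoch count, the 8192-token context, and the 922-ca/mocha-v1a dataset come from the card.

```python
# Hypothetical SFT sketch matching the card's tags (Unsloth + TRL, 1 epoch).
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Meta-Llama-3-8B-Instruct",  # assumed base model
    max_seq_length=8192,   # matches the card's context length
    load_in_4bit=True,     # what the "4bit"/Unsloth tags suggest
)
# LoRA rank and target modules are assumptions, not from the card.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("922-ca/mocha-v1a", split="train")  # ~600+ dialogue items

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",          # assumed column name
    max_seq_length=8192,
    args=TrainingArguments(
        num_train_epochs=1,             # the card says 1 epoch
        per_device_train_batch_size=2,  # assumption
        learning_rate=2e-4,             # assumption
        output_dir="monika-sft",
    ),
)
trainer.train()
```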
LLM Name: Llama 3 Monika Ddlc 8B V1
Repository: 🤗 https://huggingface.co/922-CA/Llama-3-monika-ddlc-8b-v1
Model Size: 8B
Required VRAM: 16.1 GB
Updated: 2025-02-22
Maintainer: 922-CA
Model Type: llama
Instruction-Based: Yes
Model Files: 5.0 GB (1-of-4), 5.0 GB (2-of-4), 4.9 GB (3-of-4), 1.2 GB (4-of-4)
Quantization Type: 4bit
Model Architecture: LlamaForCausalLM
License: other
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.38.2
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <|end_of_text|>
Vocabulary Size: 128256
Torch Data Type: float16
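Given the details above (float16 weights in four shards, 8192-token context), loading with plain transformers might look like the following minimal sketch. The prompt text and sampling settings are illustrative assumptions, not recommendations from the card.

```python
# Minimal loading/generation sketch for 922-CA/Llama-3-monika-ddlc-8b-v1.
# Assumes transformers >= 4.38.2 (per the card) and a GPU with ~16 GB VRAM;
# passing quantization_config=BitsAndBytesConfig(load_in_4bit=True) would be
# an option to reduce memory, per the card's "4bit" tag.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "922-CA/Llama-3-monika-ddlc-8b-v1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,  # matches the card's Torch Data Type
    device_map="auto",
)

# "Player:"/"Monika:" turn labels follow the card's usage note above.
prompt = "Player: Do you remember our last conversation?\nMonika:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.8,  # sampling settings are illustrative, not from the card
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```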

Best Alternatives to Llama 3 Monika Ddlc 8B V1

Best Alternatives                       Context / RAM     Downloads   Likes
...B Instruct Gradient 1048K 4bit       1024K / 4.5 GB    21          2
...B Instruct Gradient 1048K 8bit       1024K / 8.6 GB    7           1
...truct Gradient 1048K Bpw6 EXL2       1024K / 6.7 GB    10          2
...truct Gradient 1048K Bpw5 EXL2       1024K / 5.8 GB    7           0
Llama 3 8B Instruct 1048K 4bit          1024K / 4.5 GB    122         5
Llama 3 8B Instruct 1048K 8bit          1024K / 8.6 GB    281         7
... Gradient 1048K 8.0bpw H8 EXL2       1024K / 8.6 GB    8           3
...ct Gradient 1048K Bpw2.25 EXL2       1024K / 3.4 GB    5           1
Llama 3 8B Instruct 262K 2bit           256K / 2.5 GB     7           1
...B Instruct 262k V2 EXL2 6.0bpw       256K / 6.7 GB     11          1

Original data from Hugging Face, OpenCompass, and various public git repositories.
Release v20241227