L3.1 DarkStock 8B by rityak


Tags: Merged Model · Arxiv:2403.19522 · Autotrain compatible · Conversational · Endpoints compatible · Instruct · Llama · Region:us · Safetensors · Sharded · Tensorflow · Uncensored (the full base-model list appears under Parameters and Internals below)

L3.1 DarkStock 8B Benchmarks

Benchmark scores are percentages showing how L3.1 DarkStock 8B (rityak/L3.1-DarkStock-8B) compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").

L3.1 DarkStock 8B Parameters and Internals

Additional Notes
This model is a merge of multiple Llama 3.1 8B models combined with the Model Stock merge method (arXiv:2403.19522).
Training Details
Methodology: weight-space merge using the Model Stock method; no fine-tuning beyond the merge is reported.
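For intuition, here is a minimal sketch of the core Model Stock update from arXiv:2403.19522, written for a single weight tensor and two fine-tuned checkpoints. This is an illustration only: a real merge of the nine base models below would normally be driven by mergekit, and the function name and tensors here are assumptions.

```python
# Hedged sketch of the Model Stock update (arXiv:2403.19522) for two
# fine-tuned checkpoints and one pretrained anchor. Illustrative only;
# not the exact pipeline used to produce DarkStock.
import torch

def model_stock_merge(w_pre: torch.Tensor,
                      w_a: torch.Tensor,
                      w_b: torch.Tensor) -> torch.Tensor:
    """Merge one weight tensor from two fine-tunes toward the anchor."""
    # Task vectors: each fine-tune's deviation from the pretrained anchor.
    d_a = (w_a - w_pre).flatten()
    d_b = (w_b - w_pre).flatten()
    # Cosine of the angle between the two task vectors.
    cos = torch.dot(d_a, d_b) / (d_a.norm() * d_b.norm() + 1e-8)
    # Interpolation ratio from the paper: t = 2*cos(theta) / (1 + cos(theta)).
    t = 2.0 * cos / (1.0 + cos)
    # Blend the fine-tunes' average with the pretrained anchor.
    w_avg = (w_a + w_b) / 2.0
    return t * w_avg + (1.0 - t) * w_pre
```

The geometry does the hedging: the more the two fine-tunes agree in direction (cos θ → 1, so t → 1), the more weight their average gets; near-orthogonal fine-tunes pull the result back toward the pretrained anchor.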
LLM Name: L3.1 DarkStock 8B
Repository: https://huggingface.co/rityak/L3.1-DarkStock-8B
Base Model(s):
  aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored
  mergekit-community/L3.1-Vulca-Umboshima-8B
  rityak/L3.1-FormaxGradient
  Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
  mergekit-community/L3.1-Artemis-b2-8B
  NeverSleep/Lumimaid-v0.2-8B
  v000000/L3.1-Sthenorm-8B
  nothingiisreal/L3.1-8B-Celeste-V1.5
  Sao10K/L3.1-8B-Niitama-v1.1
Merged Model: Yes
Model Size: 8B
Required VRAM: 16.1 GB
Updated: 2024-12-21
Maintainer: rityak
Model Type: llama
Instruction-Based: Yes
Model Files: 5.0 GB (1-of-4), 5.0 GB (2-of-4), 4.9 GB (3-of-4), 1.2 GB (4-of-4)
Model Architecture: LlamaForCausalLM
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.44.1
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <|end_of_text|>
Vocabulary Size: 128256
Torch Data Type: float16
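The metadata above maps onto a standard transformers loading call. A minimal sketch, assuming a recent transformers release and enough GPU memory (the 16.1 GB VRAM figure is roughly 8B parameters × 2 bytes in float16); the prompt and generation settings are illustrative:

```python
# Hedged sketch: load rityak/L3.1-DarkStock-8B per the metadata above
# (LlamaForCausalLM, float16, sharded safetensors, 128K context).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "rityak/L3.1-DarkStock-8B"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,  # matches the listed Torch data type
    device_map="auto",          # needs ~16.1 GB of VRAM in fp16
)

messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```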

Best Alternatives to L3.1 DarkStock 8B

Best Alternatives                 | Context / RAM   | Downloads | Likes
Loki V2.6 8B 1024K                | 1024K / 16.1 GB | 101       | 3
RP Naughty V1.0f 8B               | 1024K / 16.1 GB | 111       | 1
8B Unaligned BASE Pt3             | 1024K / 16.1 GB | 18        | 0
RP Naughty V1.1f 8B               | 1024K / 16.1 GB | 104       | 0
8B Unaligned BASE Pt5             | 1024K / 16.1 GB | 16        | 0
RP Naughty V1.1i 8B               | 1024K / 16.1 GB | 57        | 0
RP Naughty V1.1a 8B               | 1024K / 16.1 GB | 23        | 2
...ja V4.95 Undi95 7B NON FICTION | 1024K / 16.1 GB | 17        | 2
RP Naughty V1.0B 8B               | 1024K / 16.1 GB | 17        | 2
Loki 8B 1024K V2                  | 1024K / 16.1 GB | 18        | 1



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217