SOLAR 10.7B Instruct V1.0 Uncensored by w4r10ck


Tags: Autotrain compatible · Endpoints compatible · Instruct · Llama · Model-index · Region: us · Safetensors · Sharded · TensorFlow · Uncensored

SOLAR 10.7B Instruct V1.0 Uncensored Benchmarks

Scores (nn.n%) show how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
SOLAR 10.7B Instruct V1.0 Uncensored (w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored)

SOLAR 10.7B Instruct V1.0 Uncensored Parameters and Internals

Additional Notes
This model is fine-tuned to be less censored.

Training Details
Data Sources:
https://huggingface.co/datasets/unalignment/toxic-dpo-v0.1
Methodology:
Fine-tuned with LoRA using the DPOTrainer
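The card states the model was trained with LoRA and a DPOTrainer. As a rough illustration (not the actual training code), here is a minimal numeric sketch of the Direct Preference Optimization objective such a trainer minimizes; the function name, the beta value, and the log-probability inputs are illustrative assumptions.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for one (chosen, rejected) preference pair.

    loss = -log sigmoid(beta * ((pi_c - ref_c) - (pi_r - ref_r)))
    where each input is the summed log-probability of a full response.
    """
    chosen_ratio = policy_chosen_logp - ref_chosen_logp      # how much more the
    rejected_ratio = policy_rejected_logp - ref_rejected_logp  # policy likes each
    logits = beta * (chosen_ratio - rejected_ratio)            # response than ref
    return -math.log(1.0 / (1.0 + math.exp(-logits)))

# The loss shrinks as the policy prefers the chosen response more
# strongly than the reference model does (illustrative numbers):
easy = dpo_loss(-10.0, -30.0, -20.0, -20.0)  # policy strongly prefers chosen
hard = dpo_loss(-30.0, -10.0, -20.0, -20.0)  # policy prefers rejected
print(easy < hard)  # True
```

In a real run, a preference dataset like toxic-dpo-v0.1 supplies the chosen/rejected pairs, and LoRA restricts the update to low-rank adapter weights rather than the full 10.7B parameters.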
LLM Name: SOLAR 10.7B Instruct V1.0 Uncensored
Repository 🤗: https://huggingface.co/w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored
Model Size: 10.7b
Required VRAM: 21.4 GB
Updated: 2024-12-22
Maintainer: w4r10ck
Model Type: llama
Instruction-Based: Yes
Model Files: 4.9 GB: 1-of-5 | 5.0 GB: 2-of-5 | 4.9 GB: 3-of-5 | 4.9 GB: 4-of-5 | 1.7 GB: 5-of-5
Model Architecture: LlamaForCausalLM
License: apache-2.0
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.35.2
Vocabulary Size: 32000
Torch Data Type: float16
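The listed figures are internally consistent, and a quick sanity check makes the relationship explicit (my arithmetic, not from the card): 10.7B parameters stored as float16 take 2 bytes each, which matches both the 21.4 GB "Required VRAM" figure and the sum of the five safetensors shards.

```python
# Weight memory for a 10.7B-parameter model in float16:
# 10.7e9 params * 2 bytes/param = 21.4e9 bytes ~= 21.4 GB.
shards_gb = [4.9, 5.0, 4.9, 4.9, 1.7]  # the five sharded model files
params_b = 10.7                        # parameters, in billions
fp16_gb = params_b * 2                 # 2 bytes per float16 parameter

print(round(sum(shards_gb), 1), fp16_gb)  # 21.4 21.4
```

Note this covers the weights alone; actual inference needs extra VRAM on top for the KV cache and activations, which grows with context length and batch size.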

Quantized Models of the SOLAR 10.7B Instruct V1.0 Uncensored

Model | Likes | Downloads | VRAM
... Instruct V1.0 Uncensored GGUF | 78 | 33757 | 4 GB
... Instruct V1.0 Uncensored GPTQ | 13 | 461 | 6 GB
...B Instruct V1.0 Uncensored AWQ | 4 | 64 | 6 GB
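The much lower VRAM figures for the quantized variants follow from bits per weight. As a back-of-envelope cross-check (my arithmetic, with assumed bit-widths; GPTQ and AWQ commonly use about 4 bits per weight plus scale/zero-point overhead, and GGUF offers a range of quant levels):

```python
# Approximate weight-only memory at a given bit-width; real quantized
# files add overhead for scales, zero-points, and unquantized layers.
def quantized_weight_gb(n_params_billions, bits_per_param):
    return n_params_billions * bits_per_param / 8  # 1e9 params -> GB

fp16 = quantized_weight_gb(10.7, 16)  # 21.4 GB, the full model
int4 = quantized_weight_gb(10.7, 4)   # ~5.35 GB before overhead
print(fp16, int4)
```

That puts a ~4-bit quant of a 10.7B model in the 5-6 GB range, consistent with the table's GPTQ/AWQ figures, while small GGUF quants can go lower still.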

Best Alternatives to SOLAR 10.7B Instruct V1.0 Uncensored

Best Alternatives | Context / RAM | Downloads | Likes
Kimiko 10.7B V3 | 4K / 21.4 GB | 101 | 1
SOLAR 10.7B Instruct V1.0 128K | 128K / 21.4 GB | 162 | 4
SOLAR 10.7B V1.0 Instruct 16K | 16K / 21.4 GB | 63 | 2
SauerkrautLM SOLAR Instruct | 8K / 21.4 GB | 798 | 46
MetaModelv2 | 8K / 21.4 GB | 1147 | 0
Kazemi 1.2 Solar | 8K / 21.4 GB | 0 | 0
SOLAR 10.7B Instruct V1.0 | 4K / 21.4 GB | 61068 | 619
ConfigurableSOLAR 10.7B | 4K / 21.4 GB | 7056 | 2
...arbonVillain En 10.7B V2 Slerp | 4K / 21.4 GB | 1472 | 30
Familidata | 4K / 21.4 GB | 2357 | 0
Note: a green score (e.g. "73.2") means the model outperforms w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored.

Rank the SOLAR 10.7B Instruct V1.0 Uncensored Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

- Instruction Following and Task Automation
- Factuality and Completeness of Knowledge
- Censorship and Alignment
- Data Analysis and Insight Generation
- Text Generation
- Text Summarization and Feature Extraction
- Code Generation
- Multi-Language Support and Translation


Original data from Hugging Face, OpenCompass, and various public Git repositories.
Release v20241217