LLM Explorer: A Curated Large Language Model Directory and Analytics

Sensualize Solar 10.7B by Sao10K

What open-source LLMs or SLMs are you in search of? 18857 in total.


Tags: Autotrain compatible, Base model: upstage/solar-10.7b..., En, Endpoints compatible, License: cc-by-nc-4.0, Llama, Pytorch, Region: us, Safetensors, Sharded, Tensorflow

Sensualize Solar 10.7B Benchmarks

Rank the Sensualize Solar 10.7B Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
Sensualize Solar 10.7B (Sao10K/Sensualize-Solar-10.7B)

Quantized Models of the Sensualize Solar 10.7B

Sensualize Solar 10.7B GGUF924 GB
Sensualize Solar 10.7B GPTQ7296 GB
Sensualize Solar 10.7B AWQ096 GB
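
The GGUF, GPTQ, and AWQ exports above are aimed at lower-VRAM setups than the full bfloat16 weights. As a minimal sketch of running one of the GGUF quantizations with llama-cpp-python (the local file name and quantization level below are assumptions, not taken from this page):

```python
# Hedged sketch: run a GGUF quantization of Sensualize Solar 10.7B with llama-cpp-python.
# The model_path value is hypothetical -- use whichever quantized file you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="sensualize-solar-10.7b.Q4_K_M.gguf",  # hypothetical local file name
    n_ctx=4096,        # matches the model's 4096-token context length
    n_gpu_layers=-1,   # offload all layers to GPU when one is available
)

result = llm("Describe a quiet evening by the sea.", max_tokens=64)
print(result["choices"][0]["text"])
```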

Best Alternatives to Sensualize Solar 10.7B

Best Alternatives              HF Rank    Context / Size    Downloads
CarbonVillain En 10.7B V4      74.52      4K / 21.4 GB      25135
FusionNet Linear               74.43      4K / 21.4 GB      25098
CarbonVillain En 10.7B V2      74.42      4K / 21.4 GB      20531
SOLARC M 10.7B                 74.42      4K / 42.9 GB      30137
StopCarbon 10.7B V5            74.41      4K / 21.4 GB      19522
CarbonVillain En 10.7B V3      74.41      4K / 21.4 GB      20580
Sakura SOLAR Instruct          74.4       4K / 21.4 GB      447326
MetaModel                      74.4       4K / 21.4 GB      21770
MetaModelv3                    74.39      4K / 21.4 GB      21050
FusionNet                      74.38      4K / 21.4 GB      24661
Note: a green Score (e.g. "73.2") means that the model is better than Sao10K/Sensualize-Solar-10.7B.

Sensualize Solar 10.7B Parameters and Internals

LLM Name: Sensualize Solar 10.7B
Repository: Sao10K/Sensualize-Solar-10.7B (open on 🤗 Hugging Face)
Base Model(s): SOLAR 10.7B V1.0 (upstage/SOLAR-10.7B-v1.0)
Model Size: 10.7b
Required VRAM: 21.4 GB
Model Type: llama
Model Files: 4.9 GB (1-of-5), 5.0 GB (2-of-5), 4.9 GB (3-of-5), 4.9 GB (4-of-5), 1.7 GB (5-of-5)
Supported Languages: en
Model Architecture: LlamaForCausalLM
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.37.0.dev0
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32000
Initializer Range: 0.02
Torch Data Type: bfloat16
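
These values correspond to a standard Transformers loading call; the 21.4 GB VRAM figure is simply the 10.7B parameters stored in bfloat16 (10.7B × 2 bytes ≈ 21.4 GB). A minimal sketch, assuming recent transformers, accelerate, and torch installs and using the repository ID shown above:

```python
# Hedged sketch: load Sao10K/Sensualize-Solar-10.7B using the metadata listed above
# (LlamaForCausalLM architecture, LlamaTokenizer, 4096-token context, bfloat16 weights).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Sao10K/Sensualize-Solar-10.7B"

tokenizer = AutoTokenizer.from_pretrained(repo_id)  # resolves to LlamaTokenizer
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches "Torch Data Type: bfloat16" (~21.4 GB of weights)
    device_map="auto",           # spread weights across available devices
)

prompt = "Write a short scene set in a seaside town."  # example prompt, not from the card
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```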
Original data from HuggingFace, OpenCompass and various public git repos.
Release v2024022003