SOLAR 10.7B Instruct V1.0 128K by CallComply


Tags: arxiv:2309.12284 · arxiv:2312.15166 · autotrain-compatible · base model (finetune): upstage/SOLAR-10.7B-v1.0 · conversational · datasets: c-s-ale/alpaca-gpt4-data, Open-Orca/OpenOrca, Intel/orca_dpo_pairs, allenai/ultrafeedback_binarized_cleaned · en · endpoints-compatible · instruct · llama · model-index · region: us · safetensors · sharded · tensorflow

SOLAR 10.7B Instruct V1.0 128K (CallComply/SOLAR-10.7B-Instruct-v1.0-128k)

SOLAR 10.7B Instruct V1.0 128K Parameters and Internals

Model Type 
text generation
Use Cases 
Areas:
research, fine-tuning
Primary Use Cases:
single-turn conversation
Limitations:
unsuitable for multi-turn conversations such as chat.
Additional Notes 
Reported metrics from the Open LLM Leaderboard show strong performance across tasks.
Training Details 
Data Sources:
c-s-ale/alpaca-gpt4-data, Open-Orca/OpenOrca, Intel/orca_dpo_pairs, allenai/ultrafeedback_binarized_cleaned, and in-house data generated using MetaMath
Methodology:
Instruction fine-tuning using supervised fine-tuning (SFT) followed by direct preference optimization (DPO); a sketch of the DPO objective appears after the Training Details block.
Context Length:
128000
Model Architecture:
Depth up-scaling (DUS) with architectural modifications and continued pretraining.
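
For context on the methodology above, here is a minimal sketch of the pairwise DPO objective (Rafailov et al., 2023), assuming sequence-level log-probabilities from the policy and a frozen reference model have already been computed. This illustrates the published loss, not CallComply's actual training code:

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Pairwise DPO loss over a batch of (chosen, rejected) completions."""
    # Log-ratios of the policy against the frozen reference model.
    chosen_logratios = policy_chosen_logps - ref_chosen_logps
    rejected_logratios = policy_rejected_logps - ref_rejected_logps
    # Push the preferred completion's log-ratio above the rejected one's.
    return -F.logsigmoid(beta * (chosen_logratios - rejected_logratios)).mean()
```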
Release Notes 
Version:
1.0
Notes:
Fine-tuned using instruction fine-tuning methods with support for up to 128k context length.
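
The 128k figure can be checked directly against the repository configuration. A minimal sketch using the standard transformers config API; the printed value should match the Context Length and Model Max Length fields below, and rope_scaling, if present, indicates how the original window was extended:

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("CallComply/SOLAR-10.7B-Instruct-v1.0-128k")
print(cfg.max_position_embeddings)         # expected: 131072
print(getattr(cfg, "rope_scaling", None))  # long-context mechanism, if configured
```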
LLM Name: SOLAR 10.7B Instruct V1.0 128K
Repository 🤗: https://huggingface.co/CallComply/SOLAR-10.7B-Instruct-v1.0-128k
Base Model(s): upstage/SOLAR-10.7B-v1.0
Model Size: 10.7b
Required VRAM: 21.4 GB
Updated: 2024-12-22
Maintainer: CallComply
Model Type: llama
Instruction-Based: Yes
Model Files: 4.9 GB (1-of-5), 5.0 GB (2-of-5), 4.9 GB (3-of-5), 4.9 GB (4-of-5), 1.7 GB (5-of-5)
Supported Languages: en
Model Architecture: LlamaForCausalLM
License: cc-by-nc-4.0
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.37.0.dev0
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32000
Torch Data Type: float16
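
Putting the fields above together, a minimal loading-and-generation sketch: transformers resolves the five safetensors shards automatically, float16 matches the listed Torch Data Type (hence the 21.4 GB VRAM requirement), and the chat template is assumed to be inherited from the upstream upstage/SOLAR-10.7B-Instruct-v1.0 single-turn format:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "CallComply/SOLAR-10.7B-Instruct-v1.0-128k"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,  # matches the card's float16 weights (~21.4 GB)
    device_map="auto",
)

# Single-turn use only, per the card's stated limitation on multi-turn chat.
conversation = [{"role": "user", "content": "Summarize depth up-scaling in one sentence."}]
prompt = tokenizer.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, use_cache=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```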

Best Alternatives to SOLAR 10.7B Instruct V1.0 128K

Best Alternatives | Context / RAM | Downloads | Likes
SOLAR 10.7B V1.0 Instruct 16K | 16K / 21.4 GB | 63 | 2
SauerkrautLM SOLAR Instruct | 8K / 21.4 GB | 798 | 46
MetaModelv2 | 8K / 21.4 GB | 1147 | 0
Kazemi 1.2 Solar | 8K / 21.4 GB | 0 | 0
SOLAR 10.7B Instruct V1.0 | 4K / 21.4 GB | 61068 | 619
ConfigurableSOLAR 10.7B | 4K / 21.4 GB | 7056 | 2
...arbonVillain En 10.7B V2 Slerp | 4K / 21.4 GB | 14723 | 0
Familidata | 4K / 21.4 GB | 2357 | 0
...0.7B Instruct STOCK SOLAR Ties | 4K / 42.9 GB | 4766 | 0
Solar Ko Common Slerp | 4K / 21 GB | 4758 | 0

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217