Linkbricks Horizon AI Korean Advanced 8B COT Boost by Saxo



Linkbricks Horizon AI Korean Advanced 8B COT Boost Benchmarks

nn.n% — how the model scores relative to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Linkbricks Horizon AI Korean Advanced 8B COT Boost (Saxo/Linkbricks-Horizon-AI-Korean-Advanced-8B-COT-boost)

Linkbricks Horizon AI Korean Advanced 8B COT Boost Parameters and Internals

Model Type 
text generation
Use Cases 
Areas:
AI, Big Data
Additional Notes 
- Tokenizer uses the base model's tokenizer without vocabulary expansion
- Enhanced with high-dimensional analysis of math and decision-making data
- Tuned to boost Chain-of-Thought (CoT) performance
- Supports Korean function calling and tool calling (see the usage sketch after this list)
- Trained with DeepSpeed Stage 3, rsLoRA, and BAdam layer mode
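
The notes above mention Korean function calling and tool calling but give no usage details. Below is a minimal, hedged sketch of how tool definitions might be passed through the Hugging Face chat template; whether this checkpoint's template actually renders tools is an assumption, and the `get_weather` tool is purely hypothetical.

```python
# Minimal sketch, assuming the checkpoint's chat template accepts a `tools`
# argument (transformers >= 4.42). The get_weather tool is hypothetical.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Saxo/Linkbricks-Horizon-AI-Korean-Advanced-8B-COT-boost"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

def get_weather(city: str) -> str:
    """Return the current weather for a city.

    Args:
        city: City name, e.g. "서울".
    """
    return "맑음, 21도"  # stub implementation ("Sunny, 21°C")

messages = [{"role": "user", "content": "서울 날씨 알려줘"}]  # "What's the weather in Seoul?"
inputs = tokenizer.apply_chat_template(
    messages,
    tools=[get_weather],           # rendered into the prompt as a JSON tool schema
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```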
Supported Languages 
ko (fluent), en (fluent), jp (fluent), cn (fluent)
Training Details 
Data Sources:
Saxo/ko_cn_translation_tech_social_science_linkbricks_single_dataset, Saxo/ko_jp_translation_tech_social_science_linkbricks_single_dataset, Saxo/en_ko_translation_tech_science_linkbricks_single_dataset_with_prompt_text_huggingface, Saxo/en_ko_translation_social_science_linkbricks_single_dataset_with_prompt_text_huggingface, Saxo/ko_aspect_sentiment_sns_mall_sentiment_linkbricks_single_dataset_with_prompt_text_huggingface, Saxo/ko_summarization_linkbricks_single_dataset_with_prompt_text_huggingface, Saxo/OpenOrca_cleaned_kor_linkbricks_single_dataset_with_prompt_text_huggingface, Saxo/ko_government_qa_total_linkbricks_single_dataset_with_prompt_text_huggingface_sampled, Saxo/ko-news-corpus-1, Saxo/ko-news-corpus-2, Saxo/ko-news-corpus-3, Saxo/ko-news-corpus-4, Saxo/ko-news-corpus-5, Saxo/ko-news-corpus-6, Saxo/ko-news-corpus-7, Saxo/ko-news-corpus-8, Saxo/ko-news-corpus-9, maywell/ko_Ultrafeedback_binarized, youjunhyeok/ko-orca-pair-and-ultrafeedback-dpo, lilacai/glaive-function-calling-v2-sharegpt, kuotient/gsm8k-ko
Data Volume:
~10M Korean samples
Methodology:
SFT → DPO (a hedged pipeline sketch follows this section)
Context Length:
128000
Hardware Used:
8 H100-80G GPUs
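
The training details above name the recipe (SFT → DPO with DeepSpeed Stage 3, rsLoRA, and BAdam layer mode) but not the scripts. The sketch below shows what an SFT → DPO pass of this shape could look like with TRL and PEFT; the dataset choices come from the list above, all hyperparameters are illustrative, and the DeepSpeed/BAdam configuration is deliberately omitted. This is not the author's actual pipeline.

```python
# Hedged sketch of an SFT -> DPO pipeline with TRL + PEFT (rsLoRA).
# DeepSpeed ZeRO Stage 3 and BAdam layer mode from the original recipe
# are not reproduced here; hyperparameters are illustrative only.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer, DPOConfig, DPOTrainer

base_id = "Saxo/Linkbricks-Horizon-AI-Nous-Hermes-3-Llama3.1-Korean-cpt-8b"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)

# rsLoRA: rank-stabilized LoRA scaling, as referenced in the notes above.
peft_cfg = LoraConfig(r=64, lora_alpha=64, use_rslora=True, task_type="CAUSAL_LM")

# Stage 1: supervised fine-tuning on one of the listed instruction datasets
# (assumes the dataset exposes a text field the trainer can consume).
sft_ds = load_dataset(
    "Saxo/ko_summarization_linkbricks_single_dataset_with_prompt_text_huggingface",
    split="train",
)
sft_trainer = SFTTrainer(
    model=model,
    train_dataset=sft_ds,
    peft_config=peft_cfg,
    args=SFTConfig(output_dir="sft-out", num_train_epochs=1, bf16=True),
)
sft_trainer.train()

# Stage 2: DPO on a Korean preference dataset from the list above
# (assumes prompt/chosen/rejected columns).
dpo_ds = load_dataset("youjunhyeok/ko-orca-pair-and-ultrafeedback-dpo", split="train")
dpo_trainer = DPOTrainer(
    model=sft_trainer.model,
    ref_model=None,                 # PEFT adapters allow an implicit reference model
    train_dataset=dpo_ds,
    processing_class=tokenizer,     # `tokenizer=` in older trl releases
    args=DPOConfig(output_dir="dpo-out", beta=0.1, bf16=True),
)
dpo_trainer.train()
```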
Input Output 
Accepted Modalities:
text
LLM Name: Linkbricks Horizon AI Korean Advanced 8B COT Boost
Repository: 🤗 https://huggingface.co/Saxo/Linkbricks-Horizon-AI-Korean-Advanced-8B-COT-boost
Base Model(s): Saxo/Linkbricks-Horizon-AI-Nous-Hermes-3-Llama3.1-Korean-cpt-8b
Model Size: 8b
Required VRAM: 16.1 GB
Updated: 2025-03-24
Maintainer: Saxo
Model Type: llama
Model Files: 5.0 GB (1-of-4), 5.0 GB (2-of-4), 4.9 GB (3-of-4), 1.2 GB (4-of-4)
Supported Languages: ko, en, jp, cn
Model Architecture: LlamaForCausalLM
License: apache-2.0
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.43.2
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <|im_end|>
Vocabulary Size: 128256
Torch Data Type: bfloat16
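
As a quick orientation to the figures above (bfloat16 weights of roughly 16.1 GB, a 131072-token context, LlamaForCausalLM architecture), here is a minimal, hedged loading-and-generation sketch; the Korean prompt and sampling settings are illustrative only, not a recommended configuration.

```python
# Hedged sketch: loading the checkpoint and prompting for step-by-step (CoT)
# reasoning in Korean. Generation settings are illustrative defaults.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Saxo/Linkbricks-Horizon-AI-Korean-Advanced-8B-COT-boost"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # matches the bfloat16 weights listed above (~16.1 GB)
    device_map="auto",
)

messages = [
    # "You are a Korean assistant that reasons carefully step by step."
    {"role": "system", "content": "당신은 단계별로 신중하게 추론하는 한국어 어시스턴트입니다."},
    # GSM8K-style arithmetic question: "If 3 apples cost 1,500 won, how much do 7 apples cost? Show your work."
    {"role": "user", "content": "사과 3개가 1,500원이면 사과 7개는 얼마인가요? 풀이 과정을 보여주세요."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=512, temperature=0.7, do_sample=True)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```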

Best Alternatives to Linkbricks Horizon AI Korean Advanced 8B COT Boost

Best Alternatives | Context / RAM | Downloads | Likes
...a 3 8B Instruct Gradient 1048K | 1024K / 16.1 GB | 5272 | 682
A6 | 1024K / 16.1 GB | 386 | 0
A8 | 1024K / 16.1 GB | 284 | 0
A4 | 1024K / 16.1 GB | 363 | 0
A2 | 1024K / 16.1 GB | 360 | 0
A18 | 1024K / 16.1 GB | 272 | 0
A10 | 1024K / 16.1 GB | 303 | 0
A12 | 1024K / 16.1 GB | 256 | 0
A1 | 1024K / 16.1 GB | 308 | 0
C31 | 1024K / 16.1 GB | 183 | 0
Note: green Score (e.g. "73.2") means that the model is better than Saxo/Linkbricks-Horizon-AI-Korean-Advanced-8B-COT-boost.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227