Linkbricks Horizon AI Korean Advanced 27B by Saxo



Linkbricks Horizon AI Korean Advanced 27B Benchmarks

nn.n% — how the model scores relative to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Linkbricks Horizon AI Korean Advanced 27B (Saxo/Linkbricks-Horizon-AI-Korean-Advanced-27B)

Linkbricks Horizon AI Korean Advanced 27B Parameters and Internals

Model Type 
text generation
Use Cases 
Areas:
Research, Commercial applications
Applications:
Language translation, Mathematics, Logic judgment, Social media analysis, AI-enhanced writing
Primary Use Cases:
Cross-language Processing, Complex problem-solving, High-dimensional analysis
Considerations:
The tokenizer does not expand the base model's vocabulary
Additional Notes 
Tokenization is identical to the base model; no vocabulary expansion was performed.
Supported Languages 
ko (Korean), en (English), jp (Japanese), cn (Chinese)
Training Details 
Data Sources:
Saxo/ko_cn_translation_tech_social_science_linkbricks_single_dataset, Saxo/ko_jp_translation_tech_social_science_linkbricks_single_dataset, Saxo/en_ko_translation_tech_science_linkbricks_single_dataset_with_prompt_text_huggingface, Saxo/en_ko_translation_social_science_linkbricks_single_dataset_with_prompt_text_huggingface, Saxo/ko_aspect_sentiment_sns_mall_sentiment_linkbricks_single_dataset_with_prompt_text_huggingface, Saxo/ko_summarization_linkbricks_single_dataset_with_prompt_text_huggingface, Saxo/OpenOrca_cleaned_kor_linkbricks_single_dataset_with_prompt_text_huggingface, Saxo/ko_government_qa_total_linkbricks_single_dataset_with_prompt_text_huggingface_sampled, Saxo/ko-news-corpus-1, Saxo/ko-news-corpus-2, Saxo/ko-news-corpus-3, Saxo/ko-news-corpus-4, Saxo/ko-news-corpus-5, Saxo/ko-news-corpus-6, Saxo/ko-news-corpus-7, Saxo/ko-news-corpus-8, Saxo/ko-news-corpus-9, maywell/ko_Ultrafeedback_binarized, youjunhyeok/ko-orca-pair-and-ultrafeedback-dpo, lilacai/glaive-function-calling-v2-sharegpt, kuotient/gsm8k-ko
Data Volume:
10M-document Korean news corpus
Methodology:
Continued Pre-Training (CPT), Supervised Fine-Tuning (SFT), Direct Preference Optimization (DPO)
Context Length:
128,000 tokens (per the model card; the released config, however, reports a max length of 8192)
Hardware Used:
8x H100-80G GPUs
Model Architecture:
gemma-2-27b-it based architecture with specialized cross-language training techniques
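The training pipeline above names CPT, SFT, and DPO but gives no further detail. As background, the DPO objective for a single preference pair can be sketched as follows; this is a minimal illustration of the standard loss, not a description of this model's actual training code:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Direct Preference Optimization loss for one preference pair.

    Each argument is the summed log-probability of the chosen or
    rejected response under the policy or the frozen reference model.
    """
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    margin = beta * (chosen_ratio - rejected_ratio)
    # -log(sigmoid(margin)): small when the policy prefers the chosen
    # response more strongly than the reference model does.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# The loss falls as the policy widens the preference margin:
print(dpo_loss(-10.0, -12.0, -11.0, -11.0))  # policy prefers chosen -> lower loss
print(dpo_loss(-12.0, -10.0, -11.0, -11.0))  # policy prefers rejected -> higher loss
```

In practice the log-probabilities come from full forward passes over the chosen and rejected completions, and `beta` controls how far the policy may drift from the reference model.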
LLM Name: Linkbricks Horizon AI Korean Advanced 27B
Repository 🤗: https://huggingface.co/Saxo/Linkbricks-Horizon-AI-Korean-Advanced-27B
Base Model(s): Gemma 2 27B It (google/gemma-2-27b-it)
Model Size: 27B
Required VRAM: 54.7 GB
Updated: 2025-02-10
Maintainer: Saxo
Model Type: gemma2
Model Files: 16.6 GB, 19.4 GB, 22.3 GB, 28.9 GB; sharded safetensors: 4.7 GB (1-of-12), 4.9 GB (2-of-12), 4.9 GB (3-of-12), 5.0 GB (4-of-12), 4.9 GB (5-of-12), 4.9 GB (6-of-12), 5.0 GB (7-of-12), 4.9 GB (8-of-12), 4.9 GB (9-of-12), 5.0 GB (10-of-12), 4.9 GB (11-of-12), 0.7 GB (12-of-12)
Supported Languages: ko, en, jp, cn
GGUF Quantization: Yes
Quantization Type: gguf
Model Architecture: Gemma2ForCausalLM
License: apache-2.0
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.43.2
Tokenizer Class: GemmaTokenizer
Padding Token: <pad>
Vocabulary Size: 256000
Torch Data Type: bfloat16
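The 54.7 GB VRAM figure follows directly from the parameter count and data type: 27B parameters at 2 bytes each (bfloat16) is about 54 GB, and the twelve safetensors shards listed above sum to exactly that figure. A quick consistency check:

```python
# Rough consistency check: a 27B-parameter model stored in bfloat16
# (2 bytes per parameter) needs about 27e9 * 2 bytes for its weights.
params = 27e9
bytes_per_param = 2  # bfloat16
weight_gb = params * bytes_per_param / 1e9
print(f"bf16 weights: ~{weight_gb:.0f} GB")  # ~54 GB, matching the listed 54.7 GB

# The 12 listed safetensors shard sizes should sum to roughly the same figure.
shards = [4.7, 4.9, 4.9, 5.0, 4.9, 4.9, 5.0, 4.9, 4.9, 5.0, 4.9, 0.7]
print(f"shard total: {sum(shards):.1f} GB")  # 54.7 GB
```

Activation memory and KV cache come on top of this at inference time, so actual usage exceeds the weight footprint.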

Best Alternatives to Linkbricks Horizon AI Korean Advanced 27B

Best Alternatives          | Context / RAM  | Downloads | Likes
Gemma 2 27B It GGUF        | 8K / 10.4 GB   | 860       | 27
Gemma 2 27B It GGUF        | 8K / 10.4 GB   | 637       | 0
Gemma 2 27B It 4bit        | 8K / 15.3 GB   | 15405     | 98
Gemma 2 27B It Bnb 4bit    | 8K / 15.8 GB   | 5623      | 11
Gemma2 Mixed Therapy Ft    | 8K / 15.8 GB   | 7         | 1
Gemma 2 27B Bnb 4bit       | 8K / 15.8 GB   | 2775      | 14
GEMMA2 27B NLI 16bit       | 8K / 54.7 GB   | 7         | 0
Gemma 2 27B It 8bit        | 8K / 28.8 GB   | 1631      | 0
Gemma 2 27B 8bit           | 8K / 28.8 GB   | 43        | 2
Gemma 2 27B 4bit           | 8K / 15.3 GB   | 42        | 0
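The file sizes in the table above indicate how aggressively each alternative is quantized. Dividing file size by parameter count gives an effective bits-per-weight figure; this rough estimator ignores GGUF metadata overhead and any unquantized tensors:

```python
def bits_per_weight(file_gb, params_b):
    """Effective bits per weight from file size (GB) and parameter count (billions)."""
    return file_gb * 8e9 / (params_b * 1e9)

# The 10.4 GB GGUF files above work out to roughly 3 bits per weight,
# i.e. an aggressive ~3-bit quantization of the 27B model:
print(f"{bits_per_weight(10.4, 27):.1f} bits/weight")  # ~3.1
print(f"{bits_per_weight(54.7, 27):.1f} bits/weight")  # ~16.2 (full bfloat16)
```

By the same arithmetic, the 15.3 GB and 15.8 GB files are ~4.5 bits per weight (typical of 4-bit schemes with scales and zero points) and the 28.8 GB files are ~8.5 bits per weight.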

Rank the Linkbricks Horizon AI Korean Advanced 27B Capabilities

🆘 Have you tried this model? Rate its performance. This feedback helps the ML community identify the most suitable models for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227