LLM Explorer: A Curated Large Language Model Directory and Analytics

Polyglot Ko 12.8B Safetensors by beomi



Tags: Autotrain compatible, Endpoints compatible, Gpt neox, Ko, License:apache-2.0, Polyglot-ko, Region:us, Safetensors, Sharded, Tensorflow

Polyglot Ko 12.8B Safetensors (beomi/polyglot-ko-12.8b-safetensors)

Quantized Models of the Polyglot Ko 12.8B Safetensors

...glot Ko 12.8B Safetensors 8bit | 18 | 13 GB
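As a back-of-envelope sanity check (a rough sketch, not taken from the listing): a 12.8B-parameter model needs about one byte per parameter in 8-bit and two bytes in float16, which is consistent with the roughly 13 GB listed for the 8-bit variant and the 25.8 GB full-precision checkpoint.

```python
# Rough weights-only memory estimate for a 12.8B-parameter model.
# Runtime overhead (KV cache, activations) is not included.
params_billions = 12.8

bytes_per_param = {"float16": 2.0, "int8": 1.0}

fp16_gb = params_billions * bytes_per_param["float16"]  # about 25.6 GB
int8_gb = params_billions * bytes_per_param["int8"]     # about 12.8 GB

print(f"float16: ~{fp16_gb:.1f} GB, int8: ~{int8_gb:.1f} GB")
```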

Best Alternatives to Polyglot Ko 12.8B Safetensors

Best Alternatives | Context / VRAM | HF Rank
Polyglot Ko 12.8B Instruct | 2K / 0.2 GB | 13050
Polyglot Ko 12.8B | 2K / 25.8 GB | 984673
KoAlpaca Polyglot 12.8B | 2K / 25.8 GB | 324549
Finance 12.8B 5e | 2K / 25.8 GB | 21
Kullm Polyglot 12.8B V2 | 2K / 25.9 GB | 280646
...t Ko 12.8B Chang Instruct Chat | 2K / 25.9 GB | 127015
Kullm Polyglot 12.8B V3 | 2K / 25.9 GB | 1014
Polyglot Ko 12.8B Instruct | 2K / 25.9 GB | 13033
Kyujin CoTy Platypus Ko 12.8B | 2K / 25.9 GB | 13062
Kyujin Poly Platypus Ko 12.8B | 2K / 25.9 GB | 13032

Polyglot Ko 12.8B Safetensors Parameters and Internals

LLM Name: Polyglot Ko 12.8B Safetensors
Repository: Open on 🤗 Hugging Face (beomi/polyglot-ko-12.8b-safetensors)
Model Size: 12.8b
Required VRAM: 25.8 GB
Model Type: gpt_neox
Model Files: 0.9 GB (1-of-28), 0.8 GB (2-of-28), 0.8 GB (3-of-28), 1.0 GB (4-of-28), 0.9 GB (5-of-28), 1.0 GB (6-of-28), 0.9 GB (7-of-28), 1.0 GB (8-of-28), 0.9 GB (9-of-28), 1.0 GB (10-of-28), 0.9 GB (11-of-28), 1.0 GB (12-of-28), 0.9 GB (13-of-28), 1.0 GB (14-of-28), 0.9 GB (15-of-28), 1.0 GB (16-of-28), 0.9 GB (17-of-28), 1.0 GB (18-of-28), 0.9 GB (19-of-28), 1.0 GB (20-of-28), 0.9 GB (21-of-28), 1.0 GB (22-of-28), 0.9 GB (23-of-28), 1.0 GB (24-of-28), 0.9 GB (25-of-28), 1.0 GB (26-of-28), 0.9 GB (27-of-28), 0.5 GB (28-of-28)
Supported Languages: ko
Model Architecture: GPTNeoXForCausalLM
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.29.2
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <|endoftext|>
Vocabulary Size: 30080
Initializer Range: 0.02
Torch Data Type: float16
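The shard sizes can be cross-checked against the Required VRAM figure: summing the 28 listed shards reproduces the 25.8 GB total.

```python
# The 28 safetensors shard sizes (GB) exactly as listed:
# shards 1-3 are 0.9, 0.8, 0.8 GB; shards 4-27 alternate 1.0/0.9 GB;
# the final shard is 0.5 GB.
shard_sizes_gb = [0.9, 0.8, 0.8] + [1.0, 0.9] * 12 + [0.5]

total_gb = round(sum(shard_sizes_gb), 1)
print(len(shard_sizes_gb), total_gb)  # 28 shards, 25.8 GB total
```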
Original data from HuggingFace, OpenCompass and various public git repos.
Release v2024022003
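Putting the listing together, here is a minimal, untested sketch of loading this checkpoint with Hugging Face transformers, using the repo id and float16 dtype from the listing. The helper name is hypothetical, `device_map="auto"` additionally assumes the accelerate package, and roughly 25.8 GB of GPU memory is needed.

```python
REPO_ID = "beomi/polyglot-ko-12.8b-safetensors"  # repo id from the listing


def load_polyglot():
    """Load the tokenizer and GPTNeoXForCausalLM model in float16.

    Imports are kept local so the sketch can be read without
    transformers installed; actually running this downloads ~25.8 GB.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(REPO_ID)
    model = AutoModelForCausalLM.from_pretrained(
        REPO_ID,
        torch_dtype=torch.float16,  # matches the listed Torch Data Type
        device_map="auto",          # spread the 28 shards across devices
    )
    return tokenizer, model
```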