LLM Explorer: A Curated Large Language Model Directory and Analytics

Llama 2 Ko Instruct 13B by daekeun-ml



Tags: Autotrain compatible, Dataset: beomi/koalpaca-v1.1a, Dataset: kyujinpy/kopen-platypu..., Endpoints compatible, Has space, Instruct, Instruction, Ko, License: llama2, Llama, Llama2, Region: us, Safetensors, Sharded, Tensorflow

Llama 2 Ko Instruct 13B (daekeun-ml/Llama-2-ko-instruct-13B)

Best Alternatives to Llama 2 Ko Instruct 13B

Model                               HF Rank   Context / VRAM   Downloads   Likes
Solarized 13B DPO                   62.05     4K / 24.9 GB     1306        1
Speechless Llama2 13B               61.36     4K / 26.7 GB     2450        4
Trurl 2 13B Pl Instruct Unload      58.44     4K / 26 GB       2805        6
GenAI Llama 2 13B                   58.17     4K / 26 GB       3913        4
...struct Llama2 Koen 13B V0.9.24   56.98     2K / 26.3 GB     3388        0
SOLAR 13B Instruct V1.0             56.65     4K / 25 GB       1348        1
Mythalion 13B                       56.48     4K / 26 GB       5933        119
...ga 13B Instruct PL Lora Unload   56.24     4K / 26 GB       2779        1
Model 007 13b V2                    55.41     4K / 26 GB       843         4
Vicuna 13B V1.5 PL Lora Unload      55.24     4K / 26 GB       2788        1
Note: a green score (e.g. "73.2") indicates that the model performs better than daekeun-ml/Llama-2-ko-instruct-13B.
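As a rough illustration of how the table above can be used to shortlist alternatives — a minimal sketch in Python, with a few rows transcribed by hand (scores and VRAM figures come from the table; the field names and the `shortlist` helper are my own):

```python
# A few rows transcribed from the alternatives table above
# (score = HF Rank, vram_gb = required VRAM in GB).
alternatives = [
    {"name": "Solarized 13B DPO", "score": 62.05, "vram_gb": 24.9},
    {"name": "Speechless Llama2 13B", "score": 61.36, "vram_gb": 26.7},
    {"name": "SOLAR 13B Instruct V1.0", "score": 56.65, "vram_gb": 25.0},
    {"name": "Mythalion 13B", "score": 56.48, "vram_gb": 26.0},
]

def shortlist(models, vram_budget_gb):
    """Keep only models that fit the VRAM budget, best score first."""
    fitting = [m for m in models if m["vram_gb"] <= vram_budget_gb]
    return sorted(fitting, key=lambda m: m["score"], reverse=True)

for m in shortlist(alternatives, vram_budget_gb=25.0):
    print(m["name"], m["score"])
```

With a 25 GB budget this keeps Solarized 13B DPO and SOLAR 13B Instruct V1.0, in that order.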

Llama 2 Ko Instruct 13B Parameters and Internals

LLM Name: Llama 2 Ko Instruct 13B
Repository: daekeun-ml/Llama-2-ko-instruct-13B (open on 🤗 Hugging Face)
Model Size: 13B
Required VRAM: 26.2 GB
Model Type: llama
Model Files: 14 sharded files — 1.9 GB each for shards 1-of-14 through 13-of-14, plus 1.5 GB for shard 14-of-14
Supported Languages: ko
Model Architecture: LlamaForCausalLM
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.34.1
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 46336
Initializer Range: 0.02
Torch Data Type: float16
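A quick sanity check of the numbers above — a minimal sketch, assuming the shard sizes listed under Model Files and the usual 2 bytes per parameter for float16 weights:

```python
# Shard sizes from the Model Files list above: thirteen 1.9 GB shards
# plus one final 1.5 GB shard.
shard_sizes_gb = [1.9] * 13 + [1.5]
total_gb = round(sum(shard_sizes_gb), 1)
print(total_gb)  # 26.2 — matches the "Required VRAM: 26.2 GB" figure

# Rough float16 weight footprint: ~13B parameters x 2 bytes each.
est_gb = round(13e9 * 2 / 1e9, 1)
print(est_gb)  # 26.0 — consistent with the shard total
```

This is only the weight footprint; actual inference needs additional memory for activations and the KV cache.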
Original data from Hugging Face, OpenCompass, and various public Git repos.

Release v2024022003