LLM Explorer: A Curated Large Language Model Directory and Analytics

Koquality Ko Ref Llama2 7B by DILAB-HYU



Tags: Autotrain compatible · Base model: hyunseoki/ko-ref-llama2-7b · Dataset: dilab-hyu/koquality · Endpoints compatible · Ko · Koquality · Llama · Pytorch · Region: us · Sharded
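The dataset tag above names the KoQuality corpus this model was tuned on. As a minimal sketch, assuming the dataset is hosted on the Hugging Face Hub under the tagged ID and exposes a "train" split (neither detail is confirmed by this page), it can be previewed with the datasets library:

    # Preview the KoQuality dataset named in the tags above.
    # The repo ID is copied from the tag; the "train" split and the
    # printed fields are assumptions, not confirmed by this page.
    from datasets import load_dataset

    ds = load_dataset("dilab-hyu/koquality", split="train")
    print(ds)      # row count and column names
    print(ds[0])   # first example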

Rank the Koquality Ko Ref Llama2 7B Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
Koquality Ko Ref Llama2 7B (DILAB-HYU/koquality-ko-ref-llama2-7b)

Best Alternatives to Koquality Ko Ref Llama2 7B

Best Alternatives | HF Rank | Context / RAM | Downloads | Likes
Bagel DPO 7B V0.1 | 67.95 | 32K / 14.4 GB | 2259 | 39
Internlm2 7B Llama | 66.94 | 32K / 15.5 GB | 1599 | 5
Llama2 Init Mistral | 60.98 | 4K / 14.4 GB | 2551 | 0
A I 0xtom 7B Slerp | 60.46 | 32K / 14.4 GB | 258 | 0
AIRIC The Mistral | 59.95 | 32K / 14.4 GB | 1941 | 3
Synatra RP Orca 2 7B V0.1 | 59.55 | 4K / 13.5 GB | 3057 | 6
Deepseek Llm 7B Chat | 59.27 | 4K / 13.9 GB | 7137 | 58
UltraQwen 7B | 59.17 | 32K / 15.4 GB | 1771 | 2
...rnlm2 20B Llama 4.0bpw H6 EXL2 | 58.5 | 32K / 11 GB | 5 | 1
Mistral 7B Guanaco1k Ep2 | 58.13 | 32K / 29 GB | 3642 | 3
Note: a green score (e.g. "73.2") indicates that the model performs better than DILAB-HYU/koquality-ko-ref-llama2-7b.
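As a rule-of-thumb check on the RAM figures in the table above (and on the 27 GB "Required VRAM" listed in the internals below), weight memory is roughly parameter count times bytes per parameter. The sketch below uses decimal GB and an assumed ~6.74e9 parameter count, the usual figure for a Llama-2-7B-class model; neither number is quoted on this page.

    # Rough weight-size estimate: parameters x bytes per parameter.
    # 6.74e9 is the standard Llama-2 7B parameter count (an assumption here);
    # decimal GB matches how file sizes are quoted on this page.
    def weight_gb(n_params: float, bytes_per_param: int) -> float:
        return n_params * bytes_per_param / 1e9

    print(f"float32: {weight_gb(6.74e9, 4):.1f} GB")  # ~27.0 GB, cf. Required VRAM below
    print(f"float16: {weight_gb(6.74e9, 2):.1f} GB")  # ~13.5 GB, cf. the ~14 GB rows above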

Koquality Ko Ref Llama2 7B Parameters and Internals

LLM Name: Koquality Ko Ref Llama2 7B
Repository: DILAB-HYU/koquality-ko-ref-llama2-7b (open on 🤗 Hugging Face)
Base Model(s): hyunseoki/ko-ref-llama2-7b (Ko Ref Llama2 7B)
Model Size: 7B
Required VRAM: 27 GB
Updated: 2024-02-21
Maintainer: DILAB-HYU
Model Type: llama
Model Files: 9.9 GB (part 1 of 3), 9.9 GB (part 2 of 3), 7.2 GB (part 3 of 3)
Supported Languages: ko
Model Architecture: LlamaForCausalLM
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.34.1
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32000
Initializer Range: 0.02
Torch Data Type: float32
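Given the internals above (LlamaForCausalLM architecture, LlamaTokenizer, a 2048-token context, float32 weights), the checkpoint can be loaded with the standard transformers API. A minimal sketch; the float16 cast, device_map, prompt, and generation settings are illustrative choices, not taken from this card:

    # Minimal loading sketch for DILAB-HYU/koquality-ko-ref-llama2-7b.
    # Repo ID, architecture, and the 2048-token limit come from the card
    # above; the fp16 cast and the Korean prompt are illustrative.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo = "DILAB-HYU/koquality-ko-ref-llama2-7b"
    tokenizer = AutoTokenizer.from_pretrained(repo)  # resolves to LlamaTokenizer
    model = AutoModelForCausalLM.from_pretrained(
        repo,
        torch_dtype=torch.float16,   # native float32 needs ~27 GB; fp16 roughly halves that
        device_map="auto",           # requires the accelerate package
    )

    prompt = "한국의 수도는 어디입니까?"  # "What is the capital of Korea?" (illustrative)
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=64)  # keep total tokens under 2048
    print(tokenizer.decode(output[0], skip_special_tokens=True))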
Original data from Hugging Face, OpenCompass, and various public git repos.
Release v2024022003