EEVE Korean Instruct 10.8B V1.0 AWQ by Copycats


Tags: 4-bit, Autotrain compatible, AWQ, Base model: yanolja/eeve-korean..., Conversational, Instruct, Ko, License: apache-2.0, Llama, Quantized, Region: us, Safetensors, Sharded, Tensorflow

EEVE Korean Instruct 10.8B V1.0 AWQ Benchmarks

Rank the EEVE Korean Instruct 10.8B V1.0 AWQ Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable models for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
EEVE Korean Instruct 10.8B V1.0 AWQ (Copycats/EEVE-Korean-Instruct-10.8B-v1.0-AWQ)

Best Alternatives to EEVE Korean Instruct 10.8B V1.0 AWQ

| Best Alternatives | HF Rank | Context/RAM | Downloads | Likes |
|---|---|---|---|---|
| ...EVE Korean Instruct 10.8B V1.0 | 66.48 | 4K / 21.6 GB | 15244 | 41 |
| Eeve 4bit Test | | 4K / 6.6 GB | 6 | 0 |
| ...Korean Instruct 10.8B V1.0 32K | | 32K / 21.6 GB | 103 | 5 |
| ...orean Instruct 10.8B V1.0 Int4 | | 4K / 6.1 GB | 7 | 0 |
| EEVE Instruct Math 10.8B | | 4K / 21.6 GB | 178 | 1 |
| Eeve Leaderboard Inst V1.5 | | 4K / 21.6 GB | 1213 | 0 |
| Eeve DPO V3 | | 4K / 21.6 GB | 1208 | 0 |
| Hansoldeco Eeve 10.8B V0.1 | | 4K / 21.6 GB | 6 | 0 |
| AlgograpV4 | | 4K / 43.2 GB | 128 | 0 |
| Eeve Alma Merged | | 4K / 43.2 GB | 20 | 0 |

Note: a green score (e.g. "73.2") indicates that the model performs better than Copycats/EEVE-Korean-Instruct-10.8B-v1.0-AWQ.
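As a sanity check on the sizes in the table above: a dense model stored in fp16/bf16 takes roughly 2 bytes per parameter, while a 4-bit AWQ checkpoint takes roughly 0.5 bytes per parameter plus quantization overhead (scales, zero points, and layers left unquantized). A quick back-of-envelope calculation:

```python
# Back-of-envelope weight-memory estimates for a 10.8B-parameter model.
params = 10.8e9

fp16_gb = params * 2 / 1e9        # 2 bytes per parameter (fp16/bf16)
awq_4bit_gb = params * 0.5 / 1e9  # 4 bits = 0.5 bytes per parameter

print(fp16_gb)      # 21.6 -- matches the 21.6 GB full-precision rows
print(awq_4bit_gb)  # 5.4  -- close to the 6.1 GB listed for the AWQ builds;
                    # the gap is quantization scales/zero-points and overhead
```

This explains why the AWQ rows in the table sit near 6 GB while the unquantized variants need 21.6 GB (or 43.2 GB for merged double-size checkpoints).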

EEVE Korean Instruct 10.8B V1.0 AWQ Parameters and Internals

LLM Name: EEVE Korean Instruct 10.8B V1.0 AWQ
Repository: Open on 🤗
Base Model(s): yanolja/EEVE-Korean-Instruct-10.8B-v1.0
Model Size: 10.8b
Required VRAM: 6.1 GB
Updated: 2024-04-20
Maintainer: Copycats
Model Type: llama
Instruction-Based: Yes
Model Files: 5.0 GB (1-of-2), 1.1 GB (2-of-2)
Supported Languages: ko
AWQ Quantization: Yes
Quantization Type: awq
Model Architecture: LlamaForCausalLM
License: apache-2.0
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.38.2
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 40960
Initializer Range: 0.02
Torch Data Type: bfloat16
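Given the internals above (LlamaForCausalLM architecture, AWQ quantization, 4096-token context), the checkpoint can be loaded through the standard `transformers` API, which supports AWQ weights natively when the `autoawq` package is installed. The snippet below is a usage sketch under those assumptions; the prompt template is assumed from the base yanolja EEVE instruct model card, not stated on this page.

```python
# Hypothetical usage sketch for the AWQ checkpoint listed above.
# Assumes `autoawq` is installed; transformers (>= the 4.38.2 listed here)
# can load AWQ-quantized weights directly via from_pretrained.
MODEL_ID = "Copycats/EEVE-Korean-Instruct-10.8B-v1.0-AWQ"


def build_prompt(user_msg: str) -> str:
    """Prompt format assumed from the base EEVE instruct model card."""
    return (
        "A chat between a curious user and an artificial intelligence assistant. "
        "The assistant gives helpful, detailed, and polite answers to the "
        "user's questions.\n"
        f"Human: {user_msg}\nAssistant:\n"
    )


if __name__ == "__main__":
    # Heavy imports deferred: only needed when actually downloading
    # and running the ~6.1 GB quantized checkpoint.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.float16,  # AWQ kernels run activations in fp16
        device_map="auto",
    )

    prompt = build_prompt("한국의 수도는 어디인가요?")  # "What is the capital of Korea?"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=128)
    reply = tokenizer.decode(
        out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    print(reply)
```

Note that generation is bounded by the 4096-token context length listed above, and `build_prompt` is an illustrative helper, not part of any published API.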


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20240042001