LLM Explorer: A Curated Large Language Model Directory and Analytics

Llama 2 Ko 7B Chat Vicuna Hf 4bit by quantumaikr



Tags: 4bit, Adapter, Finetuned, Lora, Quantized, Region:us, Tensorboard

Rank the Llama 2 Ko 7B Chat Vicuna Hf 4bit Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
Llama 2 Ko 7B Chat Vicuna Hf 4bit (quantumaikr/llama-2-ko-7b-chat-vicuna-hf-4bit)

Best Alternatives to Llama 2 Ko 7B Chat Vicuna Hf 4bit

Best Alternatives                      HF Rank   Context/RAM    Downloads   Likes
Mistral 7B Orca DPO 2h                 75.48     0K / 0.1 GB    0           2
Mistral 7B Sumz DPO 3h                 75.43     0K / 0.1 GB    0           1
Mistral 7B Orca DPO 4h                 75.42     0K / 0.1 GB    0           1
... 7B Instruct V0.2 Summ DPO Ed2      75.34     0K / 0.1 GB    0           1
... 7B Instruct V0.2 Summ DPO Ed3      75.34     0K / 0.1 GB    0           1
Mistral 7B Instruct Adapt V0.2         75.3      0K / 0.1 GB    0           1
Grindin                                72.18     0K / 0.2 GB    6           0
... Instruct V0.2 Summ Sft DPO E2      65.95     0K / 0.1 GB    0           2
Zephyr 7B DPO Qlora                    63.51     0K / 0.1 GB    653         7
Birbal 7B V1                           62.6      0K / 0 GB      9           4
Note: on the source page, a score shown in green (e.g. "73.2") marks a model that ranks better than quantumaikr/llama-2-ko-7b-chat-vicuna-hf-4bit.

Llama 2 Ko 7B Chat Vicuna Hf 4bit Parameters and Internals

LLM Name: Llama 2 Ko 7B Chat Vicuna Hf 4bit
Repository: Hugging Face (quantumaikr/llama-2-ko-7b-chat-vicuna-hf-4bit)
Model Size: 7b
Required VRAM: 0.1 GB
Updated: 2024-02-28
Maintainer: quantumaikr
Model Files: 0.1 GB, 0.0 GB
Quantization Type: 4bit
Model Architecture: Adapter
Is Biased: none
Tokenizer Class: LlamaTokenizer
Beginning of Sentence Token: <s>
End of Sentence Token: </s>
Unk Token: <unk>
PEFT Type: LORA
LoRA Model: Yes
PEFT Target Modules: q_proj|v_proj
LoRA Alpha: 8
LoRA Dropout: 0.1
R Param: 32
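
The entries above describe a 4-bit LoRA adapter (PEFT type LORA, target modules q_proj|v_proj, r=32, alpha=8, dropout 0.1) rather than a full checkpoint, so it is meant to be applied on top of a base model. Below is a minimal, hedged sketch of how such an adapter is typically loaded with transformers, bitsandbytes, and peft; the base model name beomi/llama-2-ko-7b and the Vicuna-style prompt are assumptions not stated on this page (the actual base is recorded in the adapter's config on Hugging Face).

```python
# Minimal sketch: attach this LoRA adapter to a 4-bit quantized base model.
# Assumption: the base checkpoint below is a guess; check the adapter's
# config.json ("base_model_name_or_path") for the real one.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

ADAPTER = "quantumaikr/llama-2-ko-7b-chat-vicuna-hf-4bit"
BASE_MODEL = "beomi/llama-2-ko-7b"  # assumed Llama-2-Ko base, not stated on this page

# 4-bit loading mirrors the "Quantization Type: 4bit" entry above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

# The page lists LlamaTokenizer with <s>, </s>, <unk>; the adapter repo is
# expected to ship these tokenizer files.
tokenizer = AutoTokenizer.from_pretrained(ADAPTER)

base = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL,
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach the LoRA weights (target modules q_proj|v_proj, r=32, alpha=8).
model = PeftModel.from_pretrained(base, ADAPTER)

prompt = "### Human: Hello, please introduce yourself.\n### Assistant:"  # Vicuna-style prompt, assumed
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
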
Original data from HuggingFace, OpenCompass and various public git repos.
Release v2024022003