LLM Explorer: A Curated Large Language Model Directory and Analytics

PavelGPT 7B 128K V0.1 LoRA by evilfreelancer

Looking for an open-source LLM or SLM? 18,857 models are listed in total.

Tags: Adapter, Custom code, Dataset:d0rj/alpaca-cleaned-ru, Dataset:d0rj/gsm8k-ru, Dataset:ilyagusev/ru turbo alp..., Dataset:ilyagusev/ru turbo alp..., En, Finetuned, Has space, Instruct, License:mit, LoRA, Mistral, PEFT, Region:us, Ru

Rank the PavelGPT 7B 128K V0.1 LoRA Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
PavelGPT 7B 128K V0.1 LoRA (evilfreelancer/PavelGPT-7B-128K-v0.1-LoRA)

Best Alternatives to PavelGPT 7B 128K V0.1 LoRA

Best Alternatives | HF Rank | Context/RAM | Downloads | Likes
... 7B Instruct V0.2 Summ DPO Ed2 | 75.34 | 0K / 0.1 GB | 0 | 1
... 7B Instruct V0.2 Summ DPO Ed3 | 75.34 | 0K / 0.1 GB | 0 | 1
Mistral 7B Instruct Adapt V0.2 | 75.3 | 0K / 0.1 GB | 0 | 1
... Instruct V0.2 Summ Sft DPO E2 | 65.95 | 0K / 0.1 GB | 0 | 2
Mistral 7B Instruct V0.2 | 61.79 | 0K / 0.1 GB | 3 | 5
...al Instruct 7B V0.2 ChatAlpaca | 61.21 | 0K / 0.1 GB | 92 | 0
Emissions Extraction Lora | 57.6 | 0K / 0.2 GB | 49 | 0
Zephyr 7B Beta Agent Instruct | 51.8 | 0K / 0.3 GB | 8 | 1
Llama 2 7B Instruct V0.1 | 50.94 | 0K / 0.3 GB | 4 | 1
Thai Buffala Lora 7B V0.1 |  | 0K / 0 GB | 1 | 9
Note: a green score (e.g. "73.2") indicates that the alternative model outperforms evilfreelancer/PavelGPT-7B-128K-v0.1-LoRA.
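
The Downloads and Likes columns mirror repository metadata exposed by the Hugging Face Hub (the page notes below that its data originates from HuggingFace, among other sources). As a minimal sketch, assuming the huggingface_hub client, those counters can be read for the repo listed on this page as follows:

from huggingface_hub import HfApi

# Fetch public metadata for the model repository; the returned ModelInfo
# carries the download and like counters shown in directory listings.
api = HfApi()
info = api.model_info("evilfreelancer/PavelGPT-7B-128K-v0.1-LoRA")
print(f"downloads={info.downloads}, likes={info.likes}")

HfApi.list_models, with its search and sort parameters, can be used in the same way to enumerate alternative adapters programmatically.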

PavelGPT 7B 128K V0.1 LoRA Parameters and Internals

LLM Name: PavelGPT 7B 128K V0.1 LoRA
Repository: evilfreelancer/PavelGPT-7B-128K-v0.1-LoRA (Hugging Face 🤗)
Model Size: 7B
Required VRAM: 0.1 GB
Updated: 2024-02-28
Maintainer: evilfreelancer
Instruction-Based: Yes
Model Files: 0.1 GB
Supported Languages: ru, en
Model Architecture: Adapter
License: mit
Model Max Length: 32768
Is Biased: none
Tokenizer Class: LlamaTokenizer
Padding Token: <unk>
PEFT Type: LORA
LoRA Model: Yes
PEFT Target Modules: q_proj|v_proj|k_proj|o_proj
LoRA Alpha: 16
LoRA Dropout: 0.05
R Param: 16
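
For orientation, here is a minimal sketch of how an adapter with these settings is typically attached to its base model using the Hugging Face transformers and peft libraries. The LoraConfig merely restates the hyperparameters listed above (r=16, alpha=16, dropout=0.05, bias none, target modules q_proj/v_proj/k_proj/o_proj); the base-model id is a placeholder assumption, since the exact Mistral 7B base checkpoint is not named on this page.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, PeftModel

ADAPTER_ID = "evilfreelancer/PavelGPT-7B-128K-v0.1-LoRA"
BASE_MODEL_ID = "mistralai/Mistral-7B-v0.1"  # placeholder assumption, not stated on this page

# The LoRA hyperparameters from the table above, expressed as a LoraConfig.
# PeftModel.from_pretrained reads the adapter's own config, so this object is
# only needed to re-create or further train a similar adapter.
lora_config = LoraConfig(
    r=16,                 # R Param
    lora_alpha=16,        # LoRA Alpha
    lora_dropout=0.05,    # LoRA Dropout
    bias="none",          # Is Biased: none
    target_modules=["q_proj", "v_proj", "k_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# The adapter repo ships its own tokenizer files (LlamaTokenizer, pad token <unk>).
tokenizer = AutoTokenizer.from_pretrained(ADAPTER_ID)

# Load the base model in half precision and attach the ~0.1 GB adapter weights.
base = AutoModelForCausalLM.from_pretrained(BASE_MODEL_ID, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, ADAPTER_ID)
model.eval()

After this, generation works as with any transformers causal language model (model.generate(...)).
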
Original data from HuggingFace, OpenCompass and various public git repos.
Release v2024022003