LLM Explorer: A Curated Large Language Model Directory and Analytics

Vicuna 7B V1.3 Attention Sparsity 30 by wang7776

Which open-source LLMs or SLMs are you looking for? 18,732 models listed in total.


Tags: Arxiv:2302.13971 · Arxiv:2306.05685 · Arxiv:2306.11695 · Autotrain compatible · License:apache-2.0 · Llama · Region:us · Safetensors · Sharded · Tensorflow

Vicuna 7B V1.3 Attention Sparsity 30 Benchmarks

Rank the Vicuna 7B V1.3 Attention Sparsity 30 Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable models for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
Vicuna 7B V1.3 Attention Sparsity 30 (wang7776/vicuna-7b-v1.3-attention-sparsity-30)

Best Alternatives to Vicuna 7B V1.3 Attention Sparsity 30

Best Alternatives | HF Rank | Context / RAM | Downloads | Likes
Bagel DPO 7B V0.1 | 67.95 | 32K / 14.4 GB | 2259 | 39
Internlm2 7B Llama | 66.94 | 32K / 15.5 GB | 1599 | 5
Llama2 Init Mistral | 60.98 | 4K / 14.4 GB | 2551 | 0
A I 0xtom 7B Slerp | 60.46 | 32K / 14.4 GB | 258 | 0
AIRIC The Mistral | 59.95 | 32K / 14.4 GB | 1941 | 3
Synatra RP Orca 2 7B V0.1 | 59.55 | 4K / 13.5 GB | 3057 | 6
Deepseek Llm 7B Chat | 59.27 | 4K / 13.9 GB | 7137 | 58
UltraQwen 7B | 59.17 | 32K / 15.4 GB | 1771 | 2
...rnlm2 20B Llama 4.0bpw H6 EXL2 | 58.5 | 32K / 11 GB | 5 | 1
Mistral 7B Guanaco1k Ep2 | 58.13 | 32K / 29 GB | 3642 | 3
Note: a green score (e.g. "73.2") means that the alternative performs better than wang7776/vicuna-7b-v1.3-attention-sparsity-30.

Vicuna 7B V1.3 Attention Sparsity 30 Parameters and Internals

LLM Name: Vicuna 7B V1.3 Attention Sparsity 30
Repository: wang7776/vicuna-7b-v1.3-attention-sparsity-30 (open on 🤗 Hugging Face)
Model Size: 7B
Required VRAM: 13.5 GB
Updated: 2024-02-21
Maintainer: wang7776
Model Type: llama
Model Files: 4.9 GB (1-of-3), 5.0 GB (2-of-3), 3.6 GB (3-of-3)
Model Architecture: LlamaForCausalLM
License: apache-2.0
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.36.1
Tokenizer Class: LlamaTokenizer
Padding Token: <unk>
Vocabulary Size: 32000
Initializer Range: 0.02
Torch Data Type: float16
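
The internals above (LlamaForCausalLM architecture, sharded safetensors, float16 weights, 2048-token context) are enough to load this checkpoint with the Transformers library noted in the listing. Below is a minimal sketch, not an official snippet from the model card: the repository id comes from this page, while the Vicuna-style prompt template and the generation settings are illustrative assumptions.

```python
# Minimal sketch: loading wang7776/vicuna-7b-v1.3-attention-sparsity-30
# with Hugging Face Transformers (listing: transformers 4.36.1,
# LlamaForCausalLM, float16 weights sharded across three files).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "wang7776/vicuna-7b-v1.3-attention-sparsity-30"  # repo id from this listing

tokenizer = AutoTokenizer.from_pretrained(repo_id)  # LlamaTokenizer, 32000-token vocabulary
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # listed torch dtype
    device_map="auto",          # place shards across available devices (requires accelerate)
)

# Assumption: the standard Vicuna v1.3 conversation template applies.
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "USER: What does attention sparsity mean? ASSISTANT:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)  # prompt + output must fit in 2048 tokens
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The Required VRAM figure is consistent with the float16 dtype: roughly 7B parameters × 2 bytes/parameter ≈ 13.5 GB, which also matches the sum of the sharded files (4.9 + 5.0 + 3.6 = 13.5 GB).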
Original data from Hugging Face, OpenCompass, and various public Git repositories.
Release v2024022003