Rank Vicuna 7B V1 by castorini


Tags: Arxiv:2307.09288, Arxiv:2309.15088, Autotrain compatible, En, Information retrieval, Llama, Pytorch, Region:us, Reranker, Sharded

Rank Vicuna 7B V1 Benchmarks

Benchmark scores (nn.n%) indicate how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").

Rank Vicuna 7B V1 Parameters and Internals

Model Type: auto-regressive language model
Use Cases (Areas): research
Additional Notes: trained with data augmentation
Supported Languages: en (proficient)
Training Details:
  Methodology: supervised instruction fine-tuning
  Model Architecture: transformer architecture
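Per the Information retrieval and Reranker tags and arXiv:2309.15088 (the RankVicuna paper), this model is used as a zero-shot listwise passage reranker: candidate passages are numbered in the prompt and the model emits an ordering such as "[2] > [1] > [3]". Below is a minimal sketch of assembling such a prompt; the exact instruction wording used in training is an assumption here (castorini's rank_llm repository carries the canonical prompts).

```python
# Sketch of a listwise reranking prompt in the style described in
# arXiv:2309.15088. The instruction wording below is an assumption,
# not the verbatim training prompt.

def build_listwise_prompt(query: str, passages: list[str]) -> str:
    # Number each candidate as [1], [2], ... so the model can refer to
    # them in its ranked output (e.g. "[2] > [1] > [3]").
    numbered = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        f"I will provide you with {len(passages)} passages, each indicated "
        f"by a numerical identifier [].\n"
        f"Rank the passages based on their relevance to the search query: "
        f"{query}.\n\n{numbered}\n\n"
        f"Search Query: {query}.\n"
        f"Rank the passages above. The output format should be [] > [], "
        f"e.g., [2] > [1]."
    )

prompt = build_listwise_prompt(
    "how do llamas defend themselves",
    ["Llamas spit when threatened.", "The llama is a domesticated camelid."],
)
print(prompt)
```

The model's ranked output (identifiers separated by ">") is then parsed back into a permutation of the candidate passages.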
LLM Name: Rank Vicuna 7b V1
Repository: https://huggingface.co/castorini/rank_vicuna_7b_v1
Model Size: 7b
Required VRAM: 13.5 GB
Updated: 2024-12-26
Maintainer: castorini
Model Type: llama
Model Files: 10.0 GB (1-of-2), 3.5 GB (2-of-2)
Supported Languages: en
Model Architecture: LlamaForCausalLM
License: llama2
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.31.0
Tokenizer Class: LlamaTokenizer
Beginning of Sentence Token: <s>
End of Sentence Token: </s>
Unk Token: <unk>
Vocabulary Size: 32000
Torch Data Type: float16
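Given the checkpoint details above (LlamaForCausalLM, LlamaTokenizer, fp16, two shards), here is a minimal loading sketch with Hugging Face transformers. device_map="auto" assumes the accelerate package is installed and roughly 13.5 GB of free VRAM, per the table.

```python
# Minimal sketch: load the sharded fp16 checkpoint using the classes
# listed on the card (LlamaForCausalLM / LlamaTokenizer, torch.float16).
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

model_id = "castorini/rank_vicuna_7b_v1"
tokenizer = LlamaTokenizer.from_pretrained(model_id)
model = LlamaForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the card's Torch Data Type
    device_map="auto",          # places the two shards across available devices
)

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
# Prompt plus generated tokens must stay within the 4096-token window.
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```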

Quantized Models of the Rank Vicuna 7B V1

Model | Likes | Downloads | VRAM
Rank Vicuna 7b V1 Fp16 | 3 | 1153 | 13 GB
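The ~13 GB fp16 figure follows from simple arithmetic: at 2 bytes per parameter, a roughly 6.7B-parameter checkpoint (the exact count is an assumption for this 7B-class model) occupies about 13.4 GB of weights, before activations and KV cache.

```python
# Back-of-envelope VRAM estimate for the fp16 weights of a 7B-class model.
# 6.7e9 is an assumed parameter count, not taken from the card.
params = 6.7e9
bytes_per_param = 2  # fp16 = 16 bits = 2 bytes
print(f"{params * bytes_per_param / 1e9:.1f} GB")  # ~13.4 GB, matching the table
```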

Best Alternatives to Rank Vicuna 7B V1

Best Alternatives | Context / RAM | Downloads | Likes
...1M 1000000ctx AEZAKMI 3 1 1702 | 1024K / 13.5 GB | 83 | 1
... Qwen2.5llamaify 7B V23.1 200K | 195K / 15.2 GB | 2832 | 0
SuperNeuralDreadDevil 8B | 128K / 16.1 GB | 57 | 1
Yarn Llama 2 7B 128K | 128K / 13.5 GB | 2586 | 40
LLaMA 7B PoSE YaRN 128K | 128K / 13.5 GB | 12 | 3
LLaMA 7B PoSE Linear 96K | 96K / 27 GB | 12 | 2
LLaMA 7B PoSE YaRN 96K | 96K / 13.5 GB | 13 | 1
Chat Llama2 7B 80K | 80K / 13.8 GB | 27 | 0
Llama2 7B 80K | 80K / 13.8 GB | 26 | 0
Lloma Step400 | 64K / 13.5 GB | 79 | 0
Note: a green score (e.g. "73.2") means the model is better than castorini/rank_vicuna_7b_v1.

Rank the Rank Vicuna 7B V1 Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217