Mxbai Rerank Large V2 by mixedbread-ai



Mxbai Rerank Large V2 Benchmarks

nn.n% — how the model scores relative to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Mxbai Rerank Large V2 (mixedbread-ai/mxbai-rerank-large-v2)

Mxbai Rerank Large V2 Parameters and Internals

LLM Name: Mxbai Rerank Large V2
Repository 🤗: https://huggingface.co/mixedbread-ai/mxbai-rerank-large-v2
Model Size: 1.5b
Required VRAM: 3.1 GB
Updated: 2025-06-01
Maintainer: mixedbread-ai
Model Type: qwen2
Model Files: 3.1 GB
Supported Languages: en zh de ja ko es fr ar bn ru id sw te th
Model Architecture: Qwen2ForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.49.0
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 151936
Torch Data Type: float16
Errors: replace
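The architecture above (Qwen2ForCausalLM fine-tuned for text ranking) is typically used as a pointwise reranker: each query-document pair is formatted into a prompt and scored by the probability the model assigns to a relevance token. The sketch below illustrates that scheme with a mocked forward pass; `mock_yes_no_logits` is a hypothetical stand-in for the model (here, simple token overlap), so the example runs without downloading the 3.1 GB checkpoint. With the real model you would use the maintainer's `mxbai-rerank` package or `transformers` instead.

```python
# Illustrative sketch of pointwise reranking with a causal LM.
# The model call is mocked (token overlap), NOT the actual
# mxbai-rerank-large-v2 scoring head.
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def mock_yes_no_logits(query, document):
    # Hypothetical stand-in for a forward pass: scores by word overlap.
    overlap = len(set(query.lower().split()) & set(document.lower().split()))
    return [float(overlap), 1.0]  # [yes_logit, no_logit]

def rank(query, documents):
    # Score each document by P("yes"), then sort descending.
    scored = []
    for doc in documents:
        yes_p = softmax(mock_yes_no_logits(query, doc))[0]
        scored.append((yes_p, doc))
    scored.sort(key=lambda t: t[0], reverse=True)
    return scored

docs = [
    "The capital of France is Paris.",
    "Reranking reorders retrieved passages by relevance.",
    "Bananas are rich in potassium.",
]
ranking = rank("What is the capital of France?", docs)
print(ranking[0][1])  # → The capital of France is Paris.
```

The same pattern scales to the real model: format the pair, read the relevance-token probability from the LM head, and sort the candidates by that score.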

Best Alternatives to Mxbai Rerank Large V2

Best Alternatives                Context / RAM    Downloads   Likes
ReaderLM V2                      500K / 3.1 GB    44161       641
Reader Lm 1.5B                   250K / 3.1 GB    1001        597
DeepSeek R1 Distill Qwen 1.5B    128K / 3.5 GB    1304097     1204
DeepScaleR 1.5B Preview          128K / 7.1 GB    92075       556
Qwen2.5 1.5B                     128K / 3.1 GB    567731      103
AceInstruct 1.5B                 128K / 3.5 GB    5826        20
OpenMath Nemotron 1.5B           128K / 3.1 GB    5886        20
DeepCoder 1.5B Preview           128K / 7.1 GB    1791        64
Qwen2 1.5B                       128K / 3.1 GB    104564      92
ZR1 1.5B                         128K / 7.1 GB    878         65
Note: a green score (e.g. "73.2") indicates that the alternative model outperforms mixedbread-ai/mxbai-rerank-large-v2.

Rank the Mxbai Rerank Large V2 Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227