Which open-source LLMs or SLMs are you looking for? 40,923 models in total.
Open-Source Language Model Lists by Categories
All Large Language Models
LMSYS ChatBot Arena ELO
OpenLLM LeaderBoard v1
OpenLLM LeaderBoard v2
Original & Foundation LLMs
OpenCompass LeaderBoard
Recently Added Models
Code Generating Models
Instruction-Based LLMs
LLMs Fit in 4GB RAM
LLMs Fit in 8GB RAM
LLMs Fit in 12GB RAM
LLMs Fit in 24GB RAM
LLMs Fit in 32GB RAM
GGUF Quantized Models
GPTQ Quantized Models
EXL2 Quantized Models
Fine-Tuned Models
LLMs for Commercial Use
TheBloke's Models
Context Size >16K Tokens
Mixture-Of-Experts Models
Apple's MLX LLMs
Small Language Models
Recent LLM List
Top 7B Param LLMs
Model | Maintainer | Downloads | Likes | VRAM (GB) |
---|---|---|---|---|
Mistral 7B Instruct V0.2 | mistralai | 4,432,644 | 2,603 | 14 |
Qwen2.5 7B Instruct | Qwen | 1,556,816 | 389 | 15 |
Mistral 7B Instruct V0.3 | mistralai | 4,194,123 | 1,220 | 14 |
Gemma 7B It | google | 292,302 | 1,145 | 17 |
Qwen2.5 Coder 7B Instruct | Qwen | 107,709 | 380 | 15 |
Llama 2 7B Chat Hf | meta-llama | 1,117,996 | 4,111 | 13 |
Falcon3 7B Instruct | tiiuae | 15,654 | 37 | 14 |
Mistral 7B Instruct V0.1 | mistralai | 3,700,696 | 1,541 | 14 |
Zephyr 7B Beta | HuggingFaceH4 | 231,019 | 1,633 | 14 |
Qwen2 7B Instruct | Qwen | 680,883 | 604 | 15 |
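Across these lists, the VRAM column tracks a simple rule of thumb: parameter count times bytes per weight at the serving precision, plus a margin for activations and the KV cache. A minimal sketch, assuming 16-bit weights and a roughly 10% overhead factor (both are illustrative assumptions, not the exact formula behind the site's VRAM estimates):

```python
# Back-of-the-envelope VRAM estimate for dense models.
# The 10% overhead multiplier for activations/KV cache is an assumption made
# for illustration; real usage varies with context length and batch size.

BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

def estimate_vram_gb(params_billion: float, precision: str = "fp16",
                     overhead: float = 1.1) -> float:
    """Approximate inference VRAM (in GiB) for a dense model."""
    weight_bytes = params_billion * 1e9 * BYTES_PER_PARAM[precision]
    return weight_bytes * overhead / 1024**3

if __name__ == "__main__":
    print(f"7B  at fp16 ~ {estimate_vram_gb(7):.0f} GB")          # ~14 GB, as in the table above
    print(f"7B  at int4 ~ {estimate_vram_gb(7, 'int4'):.0f} GB")  # why 4-bit quants fit small GPUs
    print(f"34B at fp16 ~ {estimate_vram_gb(34):.0f} GB")         # ~70 GB, see the 34B list below
```

The same arithmetic explains the quantized categories (GGUF, GPTQ, EXL2) above: cutting precision from 16-bit to 4-bit shrinks a 7B model from roughly 14 GB to around 4 GB.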
Top 34B Param LLMs
Model | Maintainer | Downloads | Likes | VRAM (GB) |
---|---|---|---|---|
Yi 1.5 34B Chat | 01-ai | 9,034 | 261 | 69 |
Yi 34B Chat | 01-ai | 7,259 | 346 | 69 |
Magnum V3 34B | anthracite-org | 2,534 | 29 | 69 |
YiSM Blossom5.1 34B SLERP | CombinHorizon | 2,775 | 0 | 69 |
Yi 1.5 34B 32K | 01-ai | 6,389 | 36 | 69 |
Yi 1.5 34B Chat 16K | 01-ai | 6,423 | 26 | 69 |
CodeLlama 34B Instruct Hf | codellama | 78,192 | 282 | 67 |
Yi 34B | 01-ai | 4,032 | 1,288 | 68 |
YiSM 34B 0rn | altomek | 2,757 | 1 | 67 |
Dolphin 2.9.1 Yi 1.5 34B | cognitivecomputations | 3,126 | 34 | 69 |
LLMs Fit in 16GB VRAM
Model | Maintainer | Downloads | Likes | VRAM (GB) |
---|---|---|---|---|
Ministral 8B Instruct 2410 | mistralai | 3,584,943 | 382 | 16 |
Meta Llama 3.1 8B Instruct | meta-llama | 3,087,763 | 2,600 | 16 |
Aya Expanse 8B | CohereForAI | 26,224 | 311 | 16 |
SmolLM2 1.7B Instruct | HuggingFaceTB | 90,946 | 456 | 3 |
Phi 3 Mini 4K Instruct | microsoft | 552,698 | 1,102 | 7 |
Llama 3.2 3B Instruct | meta-llama | 1,449,035 | 853 | 6 |
Mistral 7B Instruct V0.2 | mistralai | 4,432,644 | 2,603 | 14 |
Meta Llama 3 8B Instruct | meta-llama | 1,062,304 | 3,737 | 16 |
Granite 3.0 8B Instruct | ibm-granite | 37,365 | 197 | 16 |
Llama 3.1 8B Instruct | meta-llama | 4,795,092 | 3,380 | 16 |
Top Code-Generating LLMs
Model | Maintainer | Downloads | Likes | VRAM (GB) |
---|---|---|---|---|
Qwen2.5 Coder 32B Instruct | Qwen | 342,747 | 1,406 | 65 |
DeepSeek Coder V2 Instruct | deepseek-ai | 164,423 | 521 | 387 |
Qwen2.5 Coder 7B Instruct | Qwen | 107,709 | 380 | 15 |
Qwen2.5 Coder 32B | Qwen | 6,804 | 93 | 65 |
Qwen2.5 Coder 14B Instruct | Qwen | 19,234 | 66 | 29 |
Tqwendo 36B | nisten | 242 | 7 | 71 |
SqlCoder Qwen2.5 8bit | imsanjoykb | 729 | 3 | 0 |
Qwen2.5 Coder 14B | Qwen | 5,336 | 23 | 29 |
Qwen2.5 Coder 7B | Qwen | 20,858 | 84 | 15 |
Qwen2.5 Coder 32B Instruct 3bit | mlx-community | 62 | 3 | 14 |
Open-Source Language Model Lists by Benchmarks
Benchmark | Description |
---|---|
LMSys ELO | LMSys Chatbot Arena ELO Rating |
MMLU Pro | Massive Multitask Language Understanding Pro |
GPQA | Graduate-Level Google-Proof Question Answering |
MUSR | Multistep Soft Reasoning |
MATH Lvl 5 | Mathematics, Level 5 |
BBH | BIG-Bench Hard |
IFEval | Instruction Following Evaluation |
ARC | Common Sense Reasoning, Accuracy |
HellaSwag | Sentence Completion, Accuracy |
MMLU | Multi-task Language Understanding, Average |
TruthfulQA | Whether a model is truthful in generating answers |
WinoGrande | Common Sense Reasoning, Accuracy |
GSM8K | Arithmetic Reasoning, Accuracy |
HumanEval AVG | HumanEval for Multiple Languages, Pass@1 |
Python Eval | HumanEval for Python, Pass@1 |
Java Eval | HumanEval for Java, Pass@1 |
Javascript Eval | HumanEval for JavaScript, Pass@1 |
CPP Eval | HumanEval for C++, Pass@1 |
PHP Eval | HumanEval for PHP, Pass@1 |
Julia Eval | HumanEval for Julia, Pass@1 |
D Eval | HumanEval for D, Pass@1 |
Lua Eval | HumanEval for Lua, Pass@1 |
R Eval | HumanEval for R, Pass@1 |
Racket Eval | HumanEval for Racket, Pass@1 |
Rust Eval | HumanEval for Rust, Pass@1 |
Swift Eval | HumanEval for Swift, Pass@1 |
OpenCompass Avg | OpenCompass Average Score |
Exam | OpenCompass Examination Score |
Language | OpenCompass Language Capabilities |
Knowledge | OpenCompass Knowledge Capabilities |
Understanding | OpenCompass Understanding Capabilities |
Reasoning | OpenCompass Reasoning Capabilities |
MTBench | Machine Translation, BLEU |
LLME Score | LLM Explorer Score |
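Most of the code benchmarks above report Pass@1: the share of problems for which a sampled completion passes the unit tests. When several samples are drawn per problem, results are typically computed with the unbiased pass@k estimator from the HumanEval paper. A minimal sketch of that standard formula (not code taken from LLM Explorer):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021, HumanEval).
    n: samples generated per problem
    c: samples that pass the unit tests
    k: attempt budget being scored
    """
    if n - c < k:
        return 1.0  # fewer failing samples than the budget: success guaranteed
    return 1.0 - comb(n - c, k) / comb(n, k)

# With a single sample per problem, pass@1 reduces to the plain pass rate:
print(round(pass_at_k(200, 37, 1), 3))  # 0.185, i.e. 37/200
```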
About LLM Explorer
LLM Explorer is a platform connecting over 40,000 AI and ML professionals every month with the most recent large language models, 40,923 in total. Offering an extensive collection of both large and small models, it is a go-to resource for the latest in AI advancements. With intuitive categorization, powerful analytics, and up-to-date benchmarks, it simplifies the search for the right language model for any business application. Whether you're an AI enthusiast, researcher, or industry professional, LLM Explorer is your guide to the dynamic landscape of language models.
Summary Statistics
Statistic | Value |
---|---|
Total LLMs | 40,923 |
Quantized LLMs | 11,207 |
Merged LLMs | 5,919 |
Finetuned LLMs | 818 |
Instruction-Based LLMs | 8,080 |
Codegen LLMs | 1,152 |
DB Last Update | 2025-01-06 |
Original data sourced from Hugging Face, OpenCompass, and various public Git repositories.
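For reference, download and like counts of the kind shown in the tables above are exposed by the public Hugging Face Hub API. The snippet below is only a sketch of such a query using the huggingface_hub client; it is not LLM Explorer's actual pipeline, and the filter value and attribute names should be checked against the installed library version.

```python
# Sketch: list the most-downloaded text-generation models on the Hugging Face Hub.
from huggingface_hub import HfApi

api = HfApi()
models = api.list_models(
    filter="text-generation",  # pipeline tag commonly attached to causal LMs
    sort="downloads",
    direction=-1,              # descending
    limit=10,
)
for m in models:
    print(m.id, m.downloads, m.likes)
```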
Release v20241227