Which open-source LLMs or SLMs are you looking for? 45,881 models in total.
Open-Source Language Model Lists by Categories
- All Large Language Models
- LMSYS ChatBot Arena ELO
- OpenLLM LeaderBoard v1
- OpenLLM LeaderBoard v2
- Original & Foundation LLMs
- OpenCompass LeaderBoard
- Recently Added Models
- Code Generating Models
- Instruction-Based LLMs
- LLMs Fit in 4GB RAM
- LLMs Fit in 8GB RAM
- LLMs Fit in 12GB RAM
- LLMs Fit in 24GB RAM
- LLMs Fit in 32GB RAM
- GGUF Quantized Models
- GPTQ Quantized Models
- EXL2 Quantized Models
- Fine-Tuned Models
- LLMs for Commercial Use
- TheBloke's Models
- Context Size >16K Tokens
- Mixture-Of-Experts Models
- Apple's MLX LLMs
- Small Language Models

Recent LLM List
Model | Publisher | Downloads | Likes | VRAM (GB) |
---|---|---|---|---|
DeepSeek V3.0324 | deepseek-ai | 86556 | 2190 | 184 |
DeepSeek V3.0324 GGUF | unsloth | 118509 | 135 | 0 |
Openhands Lm 32B V0.1 | all-hands | 420 | 129 | 65 |
DeepSeek V3.0324 AWQ | cognitivecomputations | 5407 | 10 | 351 |
DeepSeek V3.0324 4bit | mlx-community | 689 | 29 | 198 |
...eus 3B 0.1 Ft Unsloth Bnb 4bit | unsloth | 10462 | 2 | 2 |
Qwen2.5 0.5B Instruct | Gensyn | 2992 | 0 | 1 |
....1 Pretrained Unsloth Bnb 4bit | unsloth | 2022 | 2 | 2 |
Orpheus 3B 0.1 Ft | unsloth | 1265 | 1 | 6 |
Tessa T1 32B | Tesslate | 62 | 16 | 65 |
Top 7B-Parameter LLMs
Model | Publisher | Downloads | Likes | VRAM (GB) |
---|---|---|---|---|
Qwen2.5 7B Instruct 1M | Qwen | 1041416 | 288 | 15 |
Janus Pro 7B | deepseek-ai | 307857 | 3276 | 14 |
DeepSeek R1 Distill Qwen 7B | deepseek-ai | 1467895 | 584 | 15 |
Qwen2.5 7B Instruct | Qwen | 2160764 | 609 | 15 |
Mistral 7B Instruct V0.2 | mistralai | 3316095 | 2630 | 14 |
OpenThinker 7B | open-thoughts | 33346 | 129 | 15 |
Gemma 7B It | google | 452890 | 1160 | 17 |
Zephyr 7B Beta | HuggingFaceH4 | 625812 | 1677 | 14 |
Falcon3 7B Instruct | tiiuae | 61602 | 66 | 14 |
Llama 2 7B Chat Hf | meta-llama | 1271600 | 4347 | 13 |
Top 34B-Parameter LLMs
Model | Publisher | Downloads | Likes | VRAM (GB) |
---|---|---|---|---|
Yi 1.5 34B Chat | 01-ai | 36970 | 270 | 69 |
Yi 34B Chat | 01-ai | 7425 | 350 | 69 |
Yi 1.5 34B 32K | 01-ai | 5895 | 36 | 69 |
Yi 34B | 01-ai | 7027 | 1293 | 68 |
Yi 1.5 34B Chat 16K | 01-ai | 1716 | 27 | 69 |
Blossom V5.1 34B | Azure99 | 1386 | 5 | 68 |
YiSM Blossom5.1 34B SLERP | CombinHorizon | 75 | 0 | 69 |
CodeLlama 34B Instruct Hf | codellama | 23791 | 283 | 67 |
Yislerp2 34B | allknowingroger | 13 | 0 | 69 |
Magnum V3 34B | anthracite-org | 49 | 29 | 69 |
LLMs That Fit in 16GB VRAM
Model | Publisher | Downloads | Likes | VRAM (GB) |
---|---|---|---|---|
Phi 4 Mini Instruct | microsoft | 338344 | 414 | 7 |
Phi 4 Multimodal Instruct | microsoft | 993019 | 1257 | 11 |
Ministral 8B Instruct 2410 | mistralai | 158225 | 465 | 16 |
Qwen2.5 7B Instruct 1M | Qwen | 1041416 | 288 | 15 |
Granite 3.1 8B Instruct | ibm-granite | 74336 | 154 | 16 |
DeepSeek R1 Distill Qwen 1.5B | deepseek-ai | 1784482 | 1110 | 3 |
Janus Pro 7B | deepseek-ai | 307857 | 3276 | 14 |
Llama 3.1 8B Instruct | meta-llama | 6143095 | 3798 | 16 |
DeepSeek R1 Distill Qwen 7B | deepseek-ai | 1467895 | 584 | 15 |
Meta Llama 3.1 8B Instruct | meta-llama | 3087763 | 2600 | 16 |
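The VRAM figures in these tables track a simple rule of thumb: weight memory is roughly the parameter count times the bytes per weight at the stored precision (a 7B model at 16-bit is about 14 GB, matching the 13-16 GB range listed above). A minimal sketch of that estimate, as a hypothetical helper rather than the site's actual method, which ignores activations and KV-cache memory:

```python
def estimate_vram_gb(params_billion: float, bits_per_weight: int = 16) -> float:
    """Rough weights-only VRAM estimate in GB.

    Assumes every parameter is stored at `bits_per_weight`; runtime
    overhead (activations, KV cache) is not included.
    """
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

print(estimate_vram_gb(7))      # 14.0 -> fits a 16GB card
print(estimate_vram_gb(7, 4))   # 3.5  -> a 4-bit quant fits in 4GB
```

This is why the quantized categories (GGUF, GPTQ, EXL2) matter for the "Fit in XGB RAM" lists: dropping from 16-bit to 4-bit weights cuts the footprint by roughly 4x.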
Top Code-Generating LLMs
Model | Publisher | Downloads | Likes | VRAM (GB) |
---|---|---|---|---|
Qwen2.5 Coder 32B Instruct | Qwen | 456943 | 1750 | 65 |
Openhands Lm 32B V0.1 | all-hands | 420 | 129 | 65 |
ZYH LLM Qwen2.5 14B V4 | YOYO-AI | 832 | 5 | 29 |
Qwen2.5 Coder 14B Instruct | Qwen | 116186 | 97 | 29 |
Qwen2.5 Coder 7B Instruct | Qwen | 271096 | 449 | 15 |
OlympicCoder 7B | open-r1 | 6924 | 157 | 15 |
Qwen2.5 Coder 32B | Qwen | 37946 | 112 | 65 |
OlympicCoder 32B | open-r1 | 3950 | 144 | 65 |
Qwen2.5 14B YOYO V4 | YOYO-AI | 421 | 4 | 29 |
Viper Coder V1.7 Vsm6 | prithivMLmods | 494 | 5 | 29 |
Open-Source Language Model Lists by Benchmarks
- LMSys ELO: LMSys Chatbot Arena ELO Rating
- MMLU Pro: Massive Multitask Language Understanding Pro
- GPQA: Graduate-Level Google-Proof Q&A
- MuSR: Multistep Soft Reasoning
- MATH Lvl 5: Mathematics Level 5
- BBH: Big Bench Hard
- IFEval: Instruction Following Evaluation
- ARC: Common Sense Reasoning, Accuracy
- HellaSwag: Sentence Completion, Accuracy
- MMLU: Multi-task Language Understanding, Average
- TruthfulQA: Whether a model is truthful in generating answers
- WinoGrande: Common Sense Reasoning, Accuracy
- GSM8K: Arithmetic Reasoning, Accuracy
- HumanEval AVG: HumanEval for Multiple Languages, Pass@1
- Python Eval: HumanEval for Python, Pass@1
- Java Eval: HumanEval for Java, Pass@1
- Javascript Eval: HumanEval for JavaScript, Pass@1
- CPP Eval: HumanEval for C++, Pass@1
- PHP Eval: HumanEval for PHP, Pass@1
- Julia Eval: HumanEval for Julia, Pass@1
- D Eval: HumanEval for D, Pass@1
- Lua Eval: HumanEval for Lua, Pass@1
- R Eval: HumanEval for R, Pass@1
- Racket Eval: HumanEval for Racket, Pass@1
- Rust Eval: HumanEval for Rust, Pass@1
- Swift Eval: HumanEval for Swift, Pass@1
- OpenCompass Avg: OpenCompass Average Score
- Exam: OpenCompass Examination Score
- Language: OpenCompass Language Capabilities
- Knowledge: OpenCompass Knowledge Capabilities
- Understanding: OpenCompass Understanding Capabilities
- Reasoning: OpenCompass Reasoning Capabilities
- MTBench: Machine Translation, BLEU
- LLME Score: LLM Explorer Score
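Many of the coding benchmarks above report Pass@1. For reference, the unbiased pass@k estimator introduced with HumanEval (generate n samples per problem, count c correct) can be sketched as:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples,
    drawn without replacement from n generated (c of them correct),
    passes the unit tests."""
    if n - c < k:
        return 1.0  # every size-k draw must contain a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

print(pass_at_k(4, 1, 1))  # 0.25
print(pass_at_k(3, 3, 1))  # 1.0
```

Pass@1 with n > 1 samples is preferred over a single greedy run because it averages out sampling variance.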
About LLM Explorer
LLM Explorer: a platform connecting over 40,000 AI and ML professionals every month with the most recent large language models, 45,881 in total. Offering an extensive collection of both large and small models, it is a go-to resource for the latest in AI advancements. With intuitive categorization, powerful analytics, and up-to-date benchmarks, it simplifies the search for the right language model for any business application. Whether you are an AI enthusiast, researcher, or industry professional, LLM Explorer is a guide to the dynamic landscape of language models.
Summary Statistics
Statistic | Value |
---|---|
Total LLMs | 45881 |
Quantized LLMs | 11611 |
Merged LLMs | 7952 |
Finetuned LLMs | 890 |
Instruction-Based LLMs | 9621 |
Codegen LLMs | 1315 |
DB Last Update | 2025-04-02 |
Original data from HuggingFace, OpenCompass, and various public Git repositories.
Release v20241227