Llama 2 13B GGUF by TheBloke


Tags: Arxiv:2307.09288 · Base model:meta-llama/llama-2-... · Base model:quantized:meta-llam... · En · Facebook · Gguf · Llama · Llama2 · Meta · Pytorch · Quantized · Region:us

Llama 2 13B GGUF Benchmarks

Scores shown as nn.n% indicate how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Llama 2 13B GGUF (TheBloke/Llama-2-13B-GGUF)

Llama 2 13B GGUF Parameters and Internals

Model Type 
llama, text-generation
Use Cases 
Areas:
commercial, research
Primary Use Cases:
assistant-like chat, natural language generation tasks
Considerations:
Developers should perform safety testing and tuning tailored to their specific applications
Additional Notes 
The fine-tuned Llama 2 chat models outperform open-source chat models on most benchmarks tested.
Training Details 
Data Sources:
publicly available online data
Data Volume:
2 trillion tokens
Methodology:
supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF)
Context Length:
4096
Hardware Used:
Meta's Research Super Cluster, Meta's production clusters, and third-party cloud compute
Model Architecture:
auto-regressive language model with an optimized transformer architecture
Responsible AI Considerations 
Mitigation Strategies:
Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/
Input Output 
Input Format:
{prompt}
Accepted Modalities:
text
Output Format:
text
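
The Input Output fields above mean the base model takes a raw {prompt} string; no chat template is applied. As an illustration, here is a minimal sketch of prompting one of the repository's GGUF files with llama-cpp-python, assuming that package is installed and that a quant file has already been downloaded locally (the filename below is an assumption; use whichever file you actually fetched):

# Minimal sketch, not an official example: assumes llama-cpp-python is installed
# and a quantized file such as llama-2-13b.Q4_K_M.gguf (assumed filename) is
# already present locally.
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-2-13b.Q4_K_M.gguf",  # path to the local GGUF file
    n_ctx=4096,  # matches the 4096-token context length listed above
)

# The base model expects a plain {prompt} string with no chat template.
prompt = "Briefly explain what GGUF quantization is:"
out = llm(prompt, max_tokens=128, temperature=0.7)
print(out["choices"][0]["text"])
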
LLM Name: Llama 2 13B GGUF
Repository: https://huggingface.co/TheBloke/Llama-2-13B-GGUF
Model Name: Llama 2 13B
Model Creator: Meta
Base Model(s): Llama 2 13B HF (meta-llama/Llama-2-13b-hf)
Model Size: 13b
Required VRAM: 5.4 GB
Updated: 2025-04-30
Maintainer: TheBloke
Model Type: llama
Model Files: 5.4 GB, 6.9 GB, 6.3 GB, 5.7 GB, 7.4 GB, 7.9 GB, 7.4 GB, 9.0 GB, 9.2 GB, 9.0 GB, 10.7 GB, 13.8 GB
Supported Languages: en
GGUF Quantization: Yes
Quantization Type: gguf
Model Architecture: AutoModel
License: llama2
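
The Model Files sizes above correspond to the different GGUF quantization levels published in the repository (from about 5.4 GB up to 13.8 GB); larger files generally preserve more quality but need more RAM/VRAM. Below is a minimal sketch of fetching a single quant with the huggingface_hub client, assuming that package is installed and that the repo follows TheBloke's usual llama-2-13b.<QUANT>.gguf naming (check the repository's file list for the exact filenames):

# Minimal sketch: downloads one quantized GGUF file from the repository.
# The filename is an assumption based on TheBloke's usual naming scheme;
# verify it against the actual file list on the repo page.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="TheBloke/Llama-2-13B-GGUF",
    filename="llama-2-13b.Q4_K_M.gguf",  # assumed name; smaller quants trade quality for memory
)
print("Downloaded to:", local_path)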

Best Alternatives to Llama 2 13B GGUF

Best Alternatives | Context / RAM | Downloads | Likes
MythoMax L2 13B GGUF | 0K / 5.4 GB | 44620 | 111
Llama 3 13B Instruct V0.1 GGUF | 0K / 5.1 GB | 799 | 5
LLaMa 3 Base Zeroed 13B GGUF | 0K / 5 GB | 45 | 1
Llama3 13B Ku GGUF | 0K / 8.7 GB | 49 | 0
Hermes 2 Pro Llama 3 13B GGUF | 0K / 4.6 GB | 49 | 0
...aMa 3 Instruct Zeroed 13B GGUF | 0K / 5 GB | 39 | 1
Model1 | 0K / 13.8 GB | 5 | 0
Codellama 7B Instruct GGUF | 0K / 2.8 GB | 174 | 0
LlaMAndement 13B GGUF | 0K / 4.8 GB | 152 | 0
EstopianMaid 13B GGUF | 0K / 4.8 GB | 1400 | 51
Note: a green score (e.g. "73.2") means that the model scores better than TheBloke/Llama-2-13B-GGUF.

Rank the Llama 2 13B GGUF Capabilities

Have you tried this model? Rate its performance. This feedback will help the ML community identify the most suitable model for their needs. Your contribution really does make a difference!

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227