Llama 2 70B Chat GGUF by TheBloke


Tags: arxiv:2307.09288, base model: meta-llama/llama-2-..., base model (quantized): meta-llam..., en, facebook, gguf, llama, llama2, meta, pytorch, quantized, region:us

Llama 2 70B Chat GGUF Benchmarks

Benchmark chart: scores (shown as nn.n%) compare Llama 2 70B Chat GGUF (TheBloke/Llama-2-70B-Chat-GGUF) against the reference models Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4"). No scores are reported on this card.

Llama 2 70B Chat GGUF Parameters and Internals

Model Type: llama
Use Cases
  Areas: Commercial, Research
  Applications: Assistant-like chat, a variety of natural language generation tasks
  Limitations: May produce inaccurate or biased outputs; tested only in English
  Considerations: Developers should perform safety testing and tuning tailored to their specific applications.
Supported Languages: English (proficient)
Training Details
  Data Sources: Publicly available online data
  Data Volume: 2 trillion tokens
  Methodology: Supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF)
  Hardware Used: Meta's Research Super Cluster and production clusters
  Model Architecture: Optimized transformer architecture
Safety Evaluation
  Ethical Considerations: The model may produce inaccurate, biased, or objectionable responses; safety testing is required before deployment.
Input/Output
  Input Format: Text
  Accepted Modalities: Text
  Output Format: Text
Release Notes
  Notes: The model is fine-tuned for dialogue use cases (see the prompt-format sketch below).
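Because the model is text-in/text-out and tuned for dialogue, prompts should follow Meta's Llama 2 chat template with [INST] and <<SYS>> markers. Below is a minimal Python sketch of assembling a single-turn prompt; the helper name and the example system/user strings are illustrative and not part of the original card.

    def build_llama2_chat_prompt(system: str, user: str) -> str:
        """Assemble a single-turn prompt in the Llama 2 chat format.

        The BOS token (<s>) is omitted here because GGUF runtimes such as
        llama.cpp normally prepend it during tokenization.
        """
        return (
            "[INST] <<SYS>>\n"
            f"{system}\n"
            "<</SYS>>\n\n"
            f"{user} [/INST]"
        )

    # Illustrative usage:
    prompt = build_llama2_chat_prompt(
        system="You are a helpful, honest assistant.",
        user="Explain GGUF quantization in two sentences.",
    )
    print(prompt)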
LLM Name: Llama 2 70B Chat GGUF
Repository: https://huggingface.co/TheBloke/Llama-2-70B-Chat-GGUF
Model Name: Llama 2 70B Chat
Model Creator: Meta Llama 2
Base Model(s): Llama 2 70B Chat HF (meta-llama/Llama-2-70b-chat-hf)
Model Size: 70B
Required VRAM: 29.3 GB
Updated: 2024-12-22
Maintainer: TheBloke
Model Type: llama
Model Files: 29.3 GB, 36.1 GB, 33.2 GB, 29.9 GB, 38.9 GB, 41.4 GB, 39.1 GB, 47.5 GB, 48.8 GB, 30.6 GB
Supported Languages: en
GGUF Quantization: Yes
Quantization Type: gguf
Model Architecture: AutoModel
License: llama2
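The model files listed above are the repository's different GGUF quantization levels; the smallest matches the stated 29.3 GB VRAM requirement. Below is a minimal sketch of fetching one quant and running it locally with llama-cpp-python; the filename (llama-2-70b-chat.Q4_K_M.gguf, following TheBloke's usual naming) and the generation settings are assumptions rather than values taken from this card.

    # pip install llama-cpp-python huggingface_hub
    from huggingface_hub import hf_hub_download
    from llama_cpp import Llama

    # Download one quantization level from the repository. The filename is
    # assumed from TheBloke's usual naming scheme; check the repo file list.
    model_path = hf_hub_download(
        repo_id="TheBloke/Llama-2-70B-Chat-GGUF",
        filename="llama-2-70b-chat.Q4_K_M.gguf",
    )

    # n_gpu_layers=-1 offloads all layers to the GPU when VRAM allows;
    # lower it to keep part of the model in system RAM instead.
    llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

    output = llm(
        "[INST] <<SYS>>\nYou are a helpful assistant.\n<</SYS>>\n\n"
        "Summarize what GGUF quantization does in one sentence. [/INST]",
        max_tokens=128,
        temperature=0.7,
    )
    print(output["choices"][0]["text"])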

Best Alternatives to Llama 2 70B Chat GGUF

Best Alternatives                     Context / RAM   Downloads   Likes
CodeLlama 70B Instruct GGUF           0K / 25.5 GB    4114        56
CodeLlama 70B Python GGUF             0K / 25.5 GB    3648        40
...gekit Passthrough Yqhuxcv GGUF     0K / 16.9 GB    105         0
Meta Llama 3 70B Instruct GGUF        0K / 26.4 GB    246         3
CodeLlama 70B Hf GGUF                 0K / 25.5 GB    1095        43
KafkaLM 70B German V0.1 GGUF          0K / 25.5 GB    868         23
DAD Model V2 70B Q4                   0K / 42.5 GB    5           0
Llama 2 70B Guanaco QLoRA GGUF        0K / 29.3 GB    43          0
GOAT 70B Storytelling GGUF            0K / 29.3 GB    2586        12
Meditron 70B GGUF                     0K / 29.3 GB    717         19

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217