Llama 3.1 8B by meta-llama


Tags: arxiv:2204.05149, autotrain-compatible, endpoints-compatible, facebook, meta, llama, llama-3, pytorch, safetensors, sharded, tensorflow, region:us, languages: de en es fr hi it pt th
Model Card on HF 🤗: https://huggingface.co/meta-llama/Llama-3.1-8B

Llama 3.1 8B Benchmarks

(Interactive benchmark chart omitted: the source page scores Llama 3.1 8B as percentages against the reference models Claude 3.5 Sonnet, GPT-4o, and GPT-4.)

Llama 3.1 8B Parameters and Internals

Model Type 
text generation, multilingual
Use Cases 
Areas:
Commercial, Research
Applications:
Assistant-like chat, Natural language generation tasks
Primary Use Cases:
Multilingual dialogue, Synthetic data generation
Limitations:
Use in languages beyond the officially supported set, or for any illegal activity, is prohibited.
Considerations:
Developers are responsible for additional fine-tuning and safety testing when deploying in unsupported languages.
Supported Languages 
English (High), German (High), French (High), Italian (High), Portuguese (High), Hindi (High), Spanish (High), Thai (High)
Training Details 
Data Sources:
Publicly available online data
Data Volume:
~15 trillion tokens
Methodology:
Supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF)
Context Length:
128K tokens (131,072)
Training Time:
39.3M GPU hours (cumulative across the Llama 3.1 model family; ~1.46M for the 8B model)
Hardware Used:
H100-80GB GPUs, custom-built GPU clusters
Model Architecture:
Auto-regressive transformer architecture with Grouped-Query Attention (GQA)
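
Grouped-Query Attention shrinks the KV cache by letting several query heads share one key/value head; in the 8B configuration, 32 query heads share 8 KV heads (4 queries per group, head dimension 128). Below is a minimal PyTorch sketch of the mechanism using those published head counts — an illustration only, not the actual transformers modeling code:

```python
# Minimal sketch of Grouped-Query Attention (GQA), using Llama 3.1 8B's
# published shapes: 32 query heads sharing 8 key/value heads.
# Illustrative only -- not the modeling code from transformers.
import torch
import torch.nn.functional as F

batch, seq_len = 1, 16
n_q_heads, n_kv_heads, head_dim = 32, 8, 128   # Llama 3.1 8B config values
group = n_q_heads // n_kv_heads                # 4 query heads per KV head

q = torch.randn(batch, n_q_heads, seq_len, head_dim)
k = torch.randn(batch, n_kv_heads, seq_len, head_dim)   # only 8 heads cached
v = torch.randn(batch, n_kv_heads, seq_len, head_dim)

# Replicate each KV head across its query group so shapes line up with q.
k = k.repeat_interleave(group, dim=1)          # -> (batch, 32, seq, head_dim)
v = v.repeat_interleave(group, dim=1)

# Standard scaled dot-product attention with a causal mask, as in any
# decoder-only transformer.
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)                               # torch.Size([1, 32, 16, 128])
```

With 8 cached KV heads instead of 32, the per-token KV cache is a quarter the size of full multi-head attention, which is a large part of what makes the 128K context length practical.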
Safety Evaluation 
Methodologies:
Red teaming, Adversarial evaluation
Findings:
Addressed risks in areas such as CBRNE (chemical, biological, radiological, nuclear, and explosive) threats, child safety, and cyber-attack enablement
Risk Categories:
Misinformation, Bias, Security threats
Ethical Considerations:
Emphasized responsible use and transparency in deployment.
Responsible AI Considerations 
Fairness:
Implemented safety fine-tuning to mitigate biases.
Transparency:
Open release to community for evaluation and improvement.
Accountability:
Developers responsible for deployment safety.
Mitigation Strategies:
Use of human feedback and LLM-based classifiers for data quality control.
Input Output 
Input Format:
Text
Accepted Modalities:
Multilingual Text
Output Format:
Text and Code
Release Notes 
Version:
3.1
Date:
2024-07-23
Notes:
Increased contextual length, multilingual expansion, improved safety and performance.
LLM Name: Llama 3.1 8B
Repository 🤗: https://huggingface.co/meta-llama/Llama-3.1-8B
Model Size: 8B
Required VRAM: 16.1 GB
Updated: 2024-12-21
Maintainer: meta-llama
Model Type: llama
Model Files: 5.0 GB (1-of-4), 5.0 GB (2-of-4), 4.9 GB (3-of-4), 1.2 GB (4-of-4)
Supported Languages: en, de, fr, it, pt, hi, es, th
Model Architecture: LlamaForCausalLM
License: llama3.1
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.43.0.dev0
Tokenizer Class: PreTrainedTokenizerFast
Vocabulary Size: 128256
Torch Data Type: bfloat16
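
The listing above maps directly onto a standard transformers load: the checkpoint is LlamaForCausalLM with a PreTrainedTokenizerFast tokenizer and bfloat16 weights. The 16.1 GB VRAM figure is essentially the weights alone (≈8.03B parameters × 2 bytes per bfloat16 value), before activations and KV cache. A minimal load-and-generate sketch, assuming transformers ≥ 4.43, the accelerate package, and a Hugging Face token with access to this gated repository:

```python
# Minimal load-and-generate sketch for meta-llama/Llama-3.1-8B (the base
# model, not the -Instruct variant). Assumes transformers >= 4.43 and an
# HF access token for this gated repository.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # PreTrainedTokenizerFast, vocab 128256
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the checkpoint's native dtype (~16.1 GB of weights)
    device_map="auto",           # requires `accelerate`; places weights on available GPUs
)

# The base model does plain text continuation, so prompt accordingly
# (no chat template, unlike the instruction-tuned variants).
inputs = tokenizer("The three primary colors are", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For dialogue use cases, Meta recommends the separately released Llama-3.1-8B-Instruct checkpoint rather than prompting this base model directly.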

Best Alternatives to Llama 3.1 8B

Best Alternatives                      Context / RAM     Downloads   Likes
...a 3 8B Instruct Gradient 1048K      1024K / 16.1 GB   5528        678
Thor V1.4 8B DARK FICTION              1024K / 16.1 GB   94          12
MrRoboto ProLong 8B V2b                1024K / 16.1 GB   177         0
MrRoboto ProLong 8B V1a                1024K / 16.1 GB   107         0
MrRoboto ProLong 8B V2a                1024K / 16.1 GB   100         0
HEL V0.8 8B LONG DARK                  1024K / 16.1 GB   184         0
MrRoboto ProLong 8B V1n                1024K / 16.1 GB   84          0
8B Unaligned BASE V2c                  1024K / 16.1 GB   128         0
MrRoboto ProLong 8B V2f                1024K / 16.1 GB   51          0
MrRoboto ProLong 8B V1f                1024K / 16.1 GB   63          0



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217