Llama 3.2 1B by meta-llama


Tags: arxiv:2204.05149, arxiv:2405.16406, autotrain-compatible, endpoints-compatible, facebook, llama, llama-3, meta, pytorch, region:us, safetensors
Languages: de, en, es, fr, hi, it, pt, th
Model Card on HF 🤗: https://huggingface.co/meta-llama/Llama-3.2-1B

Llama 3.2 1B Parameters and Internals

Model Type 
Multilingual, Generative, Instruction-tuned
Use Cases 
Areas:
Commercial, Research
Applications:
Agentic retrieval, Summarization tasks, Knowledge retrieval, Prompt rewriting
Primary Use Cases:
Multilingual dialogue and assistant-based tasks
Considerations:
Deployments, including those in additional languages, must adhere to safety guidelines and responsible use principles.
Additional Notes 
This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
Supported Languages 
English, German, French, Italian, Portuguese, Hindi, Spanish, Thai (all officially supported)
Training Details 
Data Sources:
A new mix of publicly available online data
Data Volume:
Up to 9 trillion tokens
Context Length:
128k tokens (131,072)
Hardware Used:
Meta's custom built GPU cluster
Model Architecture:
Auto-regressive with an optimized transformer architecture
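
The architecture and context figures above are visible directly in the repo's config. A minimal sketch for checking them with transformers, assuming the library is installed and you have accepted the llama3.2 license on Hugging Face (pass a token if the gated repo requires it):

```python
# Quick config check for meta-llama/Llama-3.2-1B (a sketch, not an official snippet).
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("meta-llama/Llama-3.2-1B")
print(cfg.model_type)               # "llama"
print(cfg.max_position_embeddings)  # 131072, i.e. the 128k context window
print(cfg.vocab_size)               # 128256
```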
Responsible AI Considerations 
Mitigation Strategies:
Llama 3.2 was developed following best practices outlined in Meta's Responsible Use Guide.
Input / Output 
Input Format:
Text
Accepted Modalities:
Multilingual Text
Output Format:
Text and code
LLM Name: Llama 3.2 1B
Repository 🤗: https://huggingface.co/meta-llama/Llama-3.2-1B
Model Size: 1B
Required VRAM: 2.5 GB
Updated: 2024-12-21
Maintainer: meta-llama
Model Type: llama
Model Files: 2.5 GB
Supported Languages: en, de, fr, it, pt, hi, es, th
Model Architecture: LlamaForCausalLM
License: llama3.2
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.45.0.dev0
Tokenizer Class: PreTrainedTokenizerFast
Vocabulary Size: 128256
Torch Data Type: bfloat16
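
The spec sheet above maps onto a standard transformers loading path. A minimal sketch, not an official snippet from the model card: it assumes transformers >= 4.45 plus accelerate, roughly 2.5 GB of free (V)RAM, and license access to the gated repo.

```python
# Minimal generation sketch for meta-llama/Llama-3.2-1B.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.2-1B"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # PreTrainedTokenizerFast, 128,256-token vocab
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the repo's bfloat16 weights (~2.5 GB)
    device_map="auto",           # requires `accelerate`; places the model on GPU if available
)

# Base (non-Instruct) checkpoint: plain text completion, no chat template.
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because this is the base pre-trained checkpoint rather than the -Instruct variant, it continues text rather than following chat-style prompts.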

Quantized Models of the Llama 3.2 1B

Model | Likes | Downloads | VRAM
Llama 3.2 1B Bnb 4bit | 8 | 65952 | 1 GB
Mlx Llama 3.2 1B Q8 | 0 | 13 | 2 GB
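
Instead of pulling a pre-quantized repo like the ones above, the base checkpoint can also be quantized at load time with bitsandbytes. A minimal sketch, assuming `bitsandbytes` and `accelerate` are installed and a CUDA GPU is available; the ~1 GB footprint mirrors the 4-bit row above and is approximate.

```python
# On-the-fly 4-bit quantization of the base repo (a sketch, assumptions noted above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-3.2-1B"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NormalFloat4, a common default for inference
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 while weights stay 4-bit
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```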

Best Alternatives to Llama 3.2 1B

Best Alternatives | Context / RAM | Downloads | Likes
LWM Text Chat 1M | 1024K / 13.5 GB | 10281 | 76
LWM Text 1M | 1024K / 13.5 GB | 1598 | 28
JOSIE 1M Base | 1024K / 13.5 GB | 12 | 1
JOSIE 1M Base | 1024K / 13.5 GB | 6 | 1
Llama 3.2 1B Instruct | 128K / 2.5 GB | 1873115 | 654
Llama 3.2 1B Instruct | 128K / 2.5 GB | 74116 | 45
Llama 3.2 1B | 128K / 2.5 GB | 125968 | 22
FastLlama 3.2 1B Instruct | 128K / 4.9 GB | 1045 | 8
Llama 3.2 SUN HDIC 1B Instruct | 128K / 3 GB | 79 | 0
Llama 3.2 1B | 128K / 2.5 GB | 18621 | 12



Original data from Hugging Face, OpenCompass, and various public Git repositories.
Release v20241217