Llama 3.1 Minitron 4B Depth Base by nvidia

Tags: Arxiv:2009.03300 · Arxiv:2407.14679 · Arxiv:2408.11796 · Autotrain compatible · En · Endpoints compatible · Llama · Llama-3 · Nemo · Nvidia · Pytorch · Region:us · Safetensors · Sharded · Tensorflow

Llama 3.1 Minitron 4B Depth Base Benchmarks

nn.n% — how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Llama 3.1 Minitron 4B Depth Base (nvidia/Llama-3.1-Minitron-4B-Depth-Base)

Llama 3.1 Minitron 4B Depth Base Parameters and Internals

Model Type: text-to-text
Use Cases:
  Areas: commercial, natural language generation tasks
  Applications: code generation, language understanding
  Primary Use Cases: 5-shot performance, zero-shot performance, code generation
  Limitations: trained on data that may be biased or toxic; may amplify biases or return toxic responses
  Considerations: developers should ensure the model meets their industry's requirements
Supported Languages: English (proficient), multilingual (proficient)
Training Details:
  Data Sources: webpages, dialogue, articles, written materials
  Data Volume: 94 billion tokens
  Methodology: knowledge distillation (see the sketch below)
  Training Period: July 29, 2024 – Aug 3, 2024
  Model Architecture: Transformer decoder (auto-regressive language model)
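The linked Minitron papers (Arxiv:2407.14679, Arxiv:2408.11796) describe depth-pruning Llama 3.1 8B and retraining the resulting 4B student against the teacher's output logits. As a rough illustration of such a logit-distillation loss — a minimal PyTorch sketch under assumed shapes and temperature, not NVIDIA's actual training code:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Mean per-token KL(teacher || student) over the vocabulary.

    Illustrative only: the temperature and reduction here are
    assumptions, not values from the Minitron papers.
    """
    vocab = student_logits.size(-1)
    # Soften both distributions with the same temperature, then flatten
    # (batch, seq, vocab) -> (batch * seq, vocab) so 'batchmean' averages
    # over tokens rather than over sequences.
    s = F.log_softmax(student_logits / temperature, dim=-1).reshape(-1, vocab)
    t = F.log_softmax(teacher_logits / temperature, dim=-1).reshape(-1, vocab)
    kl = F.kl_div(s, t, log_target=True, reduction="batchmean")
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return kl * temperature ** 2

# Toy example: batch 2, sequence 16, vocabulary 128256 (from the spec table).
student = torch.randn(2, 16, 128256)
teacher = torch.randn(2, 16, 128256)
print(distillation_loss(student, teacher))
```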
Responsible AI Considerations:
  Fairness: the model may contain societal biases.
  Transparency: not explicitly stated
  Accountability: recommended to be managed by the user's internal model team
  Mitigation Strategies: policies to develop trustworthy AI; reporting of vulnerabilities
Input/Output:
  Input Format: string
  Accepted Modalities: text
  Output Format: string
  Performance Tips: works best with inputs of 8K characters or fewer (see the usage sketch below)
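Given the string-in/string-out interface above, here is a minimal inference sketch using Hugging Face transformers. The prompt and generation settings are illustrative assumptions; only the repository ID, the bfloat16 dtype, and the base (non-instruct) nature of the model come from this page.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nvidia/Llama-3.1-Minitron-4B-Depth-Base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the Torch Data Type listed below
    device_map="auto",
)

# This is a base model, so use plain-text completion (no chat template).
prompt = "Knowledge distillation compresses a large language model by"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```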
LLM Name: Llama 3.1 Minitron 4B Depth Base
Repository 🤗: https://huggingface.co/nvidia/Llama-3.1-Minitron-4B-Depth-Base
Model Size: 4B
Required VRAM: 9.1 GB
Updated: 2025-03-12
Maintainer: nvidia
Model Type: llama
Model Files: 5.0 GB (1-of-2), 4.1 GB (2-of-2)
Supported Languages: en
Model Architecture: LlamaForCausalLM
License: other
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.43.3
Tokenizer Class: PreTrainedTokenizerFast
Vocabulary Size: 128256
Torch Data Type: bfloat16
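The context length, vocabulary size, and dtype in the table above can be cross-checked against the published configuration without downloading the ~9 GB of weights. A small sketch, assuming the standard Hugging Face LlamaConfig field names:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("nvidia/Llama-3.1-Minitron-4B-Depth-Base")
print(config.max_position_embeddings)  # expected: 131072 (context length)
print(config.vocab_size)               # expected: 128256
print(config.torch_dtype)              # expected: torch.bfloat16
```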

Best Alternatives to Llama 3.1 Minitron 4B Depth Base

Best Alternatives                     Context / RAM    Downloads  Likes
SJT 4B                                146K / 7.6 GB           29      0
Nemotron W 4b MagLight 0.1            128K / 9.2 GB           55      3
Nemotron W 4b Halo 0.1                128K / 9.2 GB           50      3
Loxa 4B                               128K / 16 GB            76      0
Aura 4B                               128K / 9 GB             35     10
Llama 3.1 Minitron 4B Width Base      128K / 9 GB           3361    188
…5 MINI 4B SFTxORPO HESSIAN AI        128K / 7.7 GB           16      0
…5 MINI 4B ORPOxSFT HESSIAN AI        128K / 7.7 GB           15      0
…5 MINI 4B SFTxORPO HESSIAN AI        128K / 7.7 GB           13      0
Note: a green score (e.g. "73.2") means the model is better than nvidia/Llama-3.1-Minitron-4B-Depth-Base.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227