Nemotron Mini 4B Instruct by nvidia


Arxiv: 2402.16819, 2407.14679   Base model (finetune): nvidia/Minitron-4B-Base   Tags: En, Instruct, Nemo, Nemotron, Pytorch, Region: US

Nemotron Mini 4B Instruct Benchmarks

nn.n% shows how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Nemotron Mini 4B Instruct (nvidia/Nemotron-Mini-4B-Instruct)

Nemotron Mini 4B Instruct Parameters and Internals

Model Type 
roleplaying, retrieval augmented generation, function calling
Use Cases 
Areas:
Roleplay, RAG QA, Function calling
Primary Use Cases:
Improved game character roleplay
Limitations:
May generate toxic, biased, or inaccurate responses
Considerations:
Use the recommended prompt template to mitigate these issues (see the inference sketch after this section).
Additional Notes 
Model ready for commercial use. Integrated with NVIDIA ACE.
Supported Languages 
English (en); proficiency level: N/A
Training Details 
Methodology:
Distillation, pruning, and quantization
Context Length:
4096
Training Time:
Feb 2024 - Aug 2024
Model Architecture:
Nemotron-4
Safety Evaluation 
Methodologies:
Garak, AEGIS, Human Content Red Teaming
Risk Categories:
prompt injection, data leakage, 13 categories of critical risks
Ethical Considerations:
Efforts to mitigate vulnerabilities and safety risks through multiple evaluation methods.
Responsible AI Considerations 
Fairness:
Model may contain biases from training data.
Accountability:
Shared responsibility for Trustworthy AI development.
Mitigation Strategies:
Encourage developers to ensure the model meets relevant industry standards.
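
A minimal inference sketch follows, assuming the transformers and torch packages (a recent transformers release that supports apply_chat_template, plus accelerate for device_map) and a GPU with room for the ~8.4 GB of bfloat16 weights listed below. Using the chat template shipped with the tokenizer applies the recommended prompt format automatically; the system prompt and user message are illustrative placeholders, not part of the model card.

# Minimal inference sketch for nvidia/Nemotron-Mini-4B-Instruct (assumptions noted above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nvidia/Nemotron-Mini-4B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the checkpoint dtype listed below
    device_map="auto",           # requires the accelerate package
)

# apply_chat_template fills in the prompt format defined in the repo's tokenizer config,
# which avoids the prompt-format issues noted above. Placeholder roleplay messages:
messages = [
    {"role": "system", "content": "You are a helpful in-game companion."},
    {"role": "user", "content": "Describe the abandoned lighthouse we just entered."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))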
LLM Name: Nemotron Mini 4B Instruct
Repository 🤗: https://huggingface.co/nvidia/Nemotron-Mini-4B-Instruct
Base Model(s): Minitron 4B Base (nvidia/Minitron-4B-Base)
Model Size: 4b
Required VRAM: 8.4 GB
Updated: 2024-12-22
Maintainer: nvidia
Model Type: nemotron
Instruction-Based: Yes
Model Files: 8.4 GB
Supported Languages: en
Model Architecture: NemotronForCausalLM
License: other
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.32.0.dev0
Tokenizer Class: PreTrainedTokenizerFast
Vocabulary Size: 256000
Torch Data Type: bfloat16
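
The listed specifications can be cross-checked against the published repository metadata. A small sketch, assuming the standard transformers AutoConfig/AutoTokenizer APIs and that the repo's config matches this listing:

# Sketch: read the published config/tokenizer and compare against the listed specs.
from transformers import AutoConfig, AutoTokenizer

model_id = "nvidia/Nemotron-Mini-4B-Instruct"
config = AutoConfig.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

print(config.model_type)                # expected: "nemotron"
print(config.max_position_embeddings)   # expected: 4096 (context length / model max length)
print(config.vocab_size)                # expected: 256000
print(config.torch_dtype)               # expected: torch.bfloat16
print(type(tokenizer).__name__)         # expected: PreTrainedTokenizerFast (or a fast subclass)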

Rank the Nemotron Mini 4B Instruct Capabilities

🆘 Have you tried this model? Rate its performance. This feedback will help the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217