Llama 3.2 3B Instruct En Hi Merge 200K by taareshg


Tags: Merged Model, 4bit, Autotrain compatible, Base model:finetune:unsloth/ll..., Base model:unsloth/llama-3.2-3..., Conversational, En, Endpoints compatible, Instruct, Llama, Quantized, Region:us, Safetensors, Sharded, Tensorflow, Trl, Unsloth

Llama 3.2 3B Instruct En Hi Merge 200K Benchmarks

Benchmark scores (nn.n%) show how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Llama 3.2 3B Instruct En Hi Merge 200K (taareshg/Llama-3.2-3B-Instruct-En-Hi-merge-200k)

Llama 3.2 3B Instruct En Hi Merge 200K Parameters and Internals

LLM Name: Llama 3.2 3B Instruct En Hi Merge 200K
Repository: https://huggingface.co/taareshg/Llama-3.2-3B-Instruct-En-Hi-merge-200k
Base Model(s): unsloth/Llama-3.2-3B-Instruct-bnb-4bit
Merged Model: Yes
Model Size: 3B
Required VRAM: 6.5 GB
Updated: 2025-03-12
Maintainer: taareshg
Model Type: llama
Instruction-Based: Yes
Model Files: 5.0 GB (1 of 2), 1.5 GB (2 of 2)
Supported Languages: en
Quantization Type: 4bit
Model Architecture: LlamaForCausalLM
License: apache-2.0
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.47.1
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <|finetune_right_pad_id|>
Vocabulary Size: 128256
Torch Data Type: bfloat16
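
The parameters above map directly onto a standard Hugging Face transformers loading call: the repository ID, 4-bit quantization type, bfloat16 dtype, and fast tokenizer are all that is needed for a quick inference run. The sketch below is illustrative only; it assumes a CUDA GPU with bitsandbytes installed and a recent transformers release (the card lists 4.47.1), and the example prompt is made up.

```python
# A minimal loading/inference sketch, assuming a CUDA GPU, bitsandbytes, and a
# recent transformers release (the card lists 4.47.1). The prompt is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

repo_id = "taareshg/Llama-3.2-3B-Instruct-En-Hi-merge-200k"

# Re-quantize to 4-bit at load time to stay near the listed 6.5 GB VRAM budget.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # matches the listed torch dtype
)

tokenizer = AutoTokenizer.from_pretrained(repo_id)  # PreTrainedTokenizerFast
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Build a chat prompt with the model's own Llama 3.2 instruct template.
messages = [{"role": "user", "content": "Translate to Hindi: How are you today?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```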

Best Alternatives to Llama 3.2 3B Instruct En Hi Merge 200K

Best Alternatives | Context / RAM | Downloads | Likes
...2 3B Instruct Unsloth Bnb 4bit | 128K / 2.4 GB | 265318 | 6
Llama32 3B En Emo 2000 Stp | 128K / 2.2 GB | 42 | 0
Llama32 3B En Emo 300 Stp | 128K / 2.2 GB | 30 | 0
ReasoningCore 3B 0 | 128K / 6.5 GB | 215 | 2
Llama32 3B En Emo 1000 Stp | 128K / 2.2 GB | 13 | 0
...ngCore 3B Instruct R01 Reflect | 128K / 6.5 GB | 28 | 1
Llama32 3B En Emo 5000 Stp | 128K / 2.2 GB | 13 | 0
Security Llama3.2 3B | 128K / 6.5 GB | 58 | 0
Llama 3.2 3B Unsloth Bnb 4bit | 128K / 2.4 GB | 26634 | 2
Llama32 3B En Emo V3 | 128K / 2.2 GB | 67 | 0
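The download and like counts in this table come from the Hugging Face Hub and change over time. Below is a small sketch of how to re-check them with the huggingface_hub client; the second repository ID is an assumption inferred from the truncated display name in the table above.

```python
# Fetch current download/like counters from the Hugging Face Hub.
from huggingface_hub import HfApi

api = HfApi()
repo_ids = [
    "taareshg/Llama-3.2-3B-Instruct-En-Hi-merge-200k",
    # Assumed full ID for the truncated "...2 3B Instruct Unsloth Bnb 4bit" row:
    "unsloth/Llama-3.2-3B-Instruct-unsloth-bnb-4bit",
]

for repo_id in repo_ids:
    info = api.model_info(repo_id)
    print(f"{repo_id}: {info.downloads} downloads, {info.likes} likes")
```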

Rank the Llama 3.2 3B Instruct En Hi Merge 200K Capabilities

Have you tried this model? Rate its performance. This feedback helps the ML community identify the most suitable models for their needs. Your contribution really does make a difference!

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227