TinyLlama 1.1B Chat V1.0 Bpw4 EXL2 by blockblockblock


  4-bit   Autotrain compatible   Conversational   Dataset:bigcode/starcoderdata   Dataset:cerebras/SlimPajama-627B   Dataset:HuggingFaceH4/ultrachat_200k   Dataset:HuggingFaceH4/ultrafeedback_binarized   En   Endpoints compatible   Exl2   Llama   Quantized   Region:us

TinyLlama 1.1B Chat V1.0 Bpw4 EXL2 Benchmarks

nn.n%: how the model scores relative to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
TinyLlama 1.1B Chat V1.0 Bpw4 EXL2 (blockblockblock/TinyLlama-1.1B-Chat-v1.0-bpw4-exl2)

TinyLlama 1.1B Chat V1.0 Bpw4 EXL2 Parameters and Internals

Model Type 
text generation, chatbot
Additional Notes 
The model is fine-tuned as a chat model and is compatible with projects built on Llama.
Supported Languages 
en (primary)
Training Details 
Data Sources:
cerebras/SlimPajama-627B, bigcode/starcoderdata, HuggingFaceH4/ultrachat_200k, HuggingFaceH4/ultrafeedback_binarized
Data Volume:
3 trillion tokens
Methodology:
Pretraining, followed by chat fine-tuning with Hugging Face's Zephyr training recipe and alignment with 🤗 TRL's DPOTrainer (a sketch of that step follows this list).
Training Time:
90 days
Hardware Used:
16 A100-40G GPUs
Model Architecture:
Adopts Llama 2 architecture and tokenizer
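
For orientation, here is a minimal, hypothetical sketch of the DPO alignment step with 🤗 TRL's DPOTrainer on the ultrafeedback_binarized dataset listed above. It is not the authors' actual training script: it assumes a recent TRL release (≥ 0.12, where the tokenizer is passed as processing_class), and the hyperparameters are illustrative only.

```python
# Hypothetical sketch of DPO alignment with TRL's DPOTrainer.
# Assumptions: trl >= 0.12, transformers and datasets installed;
# hyperparameters are illustrative, not the values used for this model.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# In practice DPO would start from the SFT checkpoint; the final chat
# model id is used here only as a placeholder.
model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Preference pairs (prompt / chosen / rejected) from UltraFeedback.
train_dataset = load_dataset(
    "HuggingFaceH4/ultrafeedback_binarized", split="train_prefs"
)

args = DPOConfig(
    output_dir="tinyllama-dpo",
    beta=0.1,                        # strength of the KL penalty to the reference model
    per_device_train_batch_size=2,
    learning_rate=5e-7,
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,
)
trainer.train()
```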
Input Output 
Accepted Modalities:
text
Output Format:
generated text
Performance Tips:
Use the tokenizer's chat template for best results (see the example below).
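
Below is a minimal sketch of building a prompt with the chat template via 🤗 Transformers' apply_chat_template. The tokenizer is loaded from the upstream TinyLlama/TinyLlama-1.1B-Chat-v1.0 repo, on the assumption that this EXL2 quant shares its tokenizer.

```python
# Minimal sketch: format a conversation with the model's chat template.
# Assumption: the tokenizer of the upstream TinyLlama chat repo matches
# the one shipped with this EXL2 quant.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain EXL2 quantization in one sentence."},
]
# Render the Zephyr-style prompt string and append the assistant header.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```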
LLM Name: TinyLlama 1.1B Chat V1.0 Bpw4 EXL2
Repository (🤗): https://huggingface.co/blockblockblock/TinyLlama-1.1B-Chat-v1.0-bpw4-exl2
Base Model(s): unrahul/TinyLlama-1.1B-Chat-v1.0-fp4
Model Size: 1.1B
Required VRAM: 0.7 GB
Updated: 2025-02-16
Maintainer: blockblockblock
Model Type: llama
Model Files: 0.7 GB
Supported Languages: en
Quantization Type: exl2
Model Architecture: LlamaForCausalLM
License: apache-2.0
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.35.0
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32000
Torch Data Type: bfloat16
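
Since EXL2 weights cannot be loaded with plain transformers, here is a minimal loading sketch with the exllamav2 library, following the patterns in its bundled examples. Assumptions: exllamav2 is installed with a CUDA build, and the repository has been downloaded locally (e.g. with huggingface-cli download).

```python
# Minimal sketch: load this 4-bpw EXL2 quant with exllamav2.
# Assumption: the repo has been downloaded to a local directory.
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

model_dir = "TinyLlama-1.1B-Chat-v1.0-bpw4-exl2"  # local download path

config = ExLlamaV2Config()
config.model_dir = model_dir
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)            # ~0.7 GB of VRAM for this quant
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7

# Zephyr-style chat prompt, as produced by the chat template above.
prompt = (
    "<|system|>\nYou are a helpful assistant.</s>\n"
    "<|user|>\nHello!</s>\n<|assistant|>\n"
)
print(generator.generate_simple(prompt, settings, 128))
```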

Best Alternatives to TinyLlama 1.1B Chat V1.0 Bpw4 EXL2

Best Alternatives                    Context / RAM    Downloads    Likes
...E TinyLlama 1.1B 32K Base 4bit    32K / 0.6 GB     12           1
...E TinyLlama 1.1B 32K Base 8bit    32K / 1.2 GB     11           1
...1B 32K Instruct 8.0bpw H8 EXL2    32K / 1.2 GB     6            1
...1B 32K Instruct 3.0bpw H6 EXL2    32K / 0.5 GB     5            0
Athena TinyLlama V0.1                16K / 2.2 GB     14           0
...lama PY CODER 4bit Lora 4k V12    4K / 2.2 GB      125          0
Tinyllama Coder Py V14               4K / 2.2 GB      69           0
...icipal Prediction Merged Model    4K / 2.2 GB      127          0
TiamaPY V27                          4K / 2.2 GB      68           0
TiamaPY 1.1B V24                     4K / 2.2 GB      75           0
Note: a green score (e.g. "73.2") means the model is better than blockblockblock/TinyLlama-1.1B-Chat-v1.0-bpw4-exl2.

Rank the TinyLlama 1.1B Chat V1.0 Bpw4 EXL2 Capabilities

🆘 Have you tried this model? Rate its performance. This feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227