TinyLlama 1.1B Step 50K 105B by TinyLlama


Tags: autotrain-compatible · dataset:bigcode/starcoderdata · dataset:cerebras/slimpajama-62... · en · endpoints-compatible · llama · pytorch · region:us · safetensors

TinyLlama 1.1B Step 50K 105B Benchmarks

Scores (nn.n%) indicate how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
TinyLlama 1.1B Step 50K 105B (TinyLlama/TinyLlama-1.1B-step-50K-105b)

TinyLlama 1.1B Step 50K 105B Parameters and Internals

Model Type 
text generation
Use Cases 
Areas:
open-source projects and applications that require a restricted compute and memory footprint
Supported Languages 
en (Low)
Training Details 
Data Sources:
cerebras/SlimPajama-627B, bigcode/starcoderdata
Data Volume:
3 trillion tokens
Training Time:
90 days
Hardware Used:
16 A100-40G GPUs
Model Architecture:
Same architecture and tokenizer as Llama 2
Input Output 
Accepted Modalities:
text
Output Format:
text
Performance Tips:
Use torch_dtype=torch.float16 and device_map='auto' (see the loading sketch after this section).
Release Notes 
Version:
intermediate checkpoint at 50K training steps (105B tokens)
Date:
2023-09-04
Notes:
Initial intermediate checkpoint release.
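
A minimal loading sketch based on the performance tips above. It assumes torch, transformers, and accelerate are installed; the prompt string is illustrative, not from the model card:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "TinyLlama/TinyLlama-1.1B-step-50K-105b"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.float16,  # halves memory vs. the float32 weights listed below
        device_map="auto",          # requires accelerate; places layers on available devices
    )

    prompt = "The TinyLlama project aims to"  # illustrative prompt
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))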
LLM Name: TinyLlama 1.1B Step 50K 105B
Repository: 🤗 https://huggingface.co/TinyLlama/TinyLlama-1.1B-step-50K-105b
Model Size: 1.1b
Required VRAM: 4.4 GB
Updated: 2025-02-22
Maintainer: TinyLlama
Model Type: llama
Model Files: 4.4 GB, 4.4 GB
Supported Languages: en
Model Architecture: LlamaForCausalLM
License: apache-2.0
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.31.0.dev0
Tokenizer Class: LlamaTokenizer
Beginning of Sentence Token: <s>
End of Sentence Token: </s>
Unk Token: <unk>
Vocabulary Size: 32000
Torch Data Type: float32
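
The context-length, vocabulary, and special-token values listed above can be checked directly against the checkpoint; a short verification sketch, assuming the same model_id as before:

    from transformers import AutoConfig, AutoTokenizer

    model_id = "TinyLlama/TinyLlama-1.1B-step-50K-105b"
    cfg = AutoConfig.from_pretrained(model_id)
    tok = AutoTokenizer.from_pretrained(model_id)

    print(cfg.max_position_embeddings)                  # 2048 (context length)
    print(cfg.vocab_size)                               # 32000
    print(tok.bos_token, tok.eos_token, tok.unk_token)  # <s> </s> <unk>
    print(type(tok).__name__)                           # LlamaTokenizerFast (fast wrapper around LlamaTokenizer)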

Quantized Models of the TinyLlama 1.1B Step 50K 105B

Model                              Likes  Downloads  VRAM
...ma 1.1B Step 50K 105B AWQ 4bit  0      8          0.8 GB
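
A sketch of loading a 4-bit AWQ variant through transformers (assumes the autoawq package is installed; the repo id below is hypothetical, since the table truncates the real name):

    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Hypothetical repo id; the table above truncates the actual one.
    awq_id = "someorg/TinyLlama-1.1B-step-50K-105b-AWQ-4bit"
    tokenizer = AutoTokenizer.from_pretrained(awq_id)
    model = AutoModelForCausalLM.from_pretrained(
        awq_id,
        device_map="auto",  # the 4-bit weights need roughly 0.8 GB of VRAM per the table
    )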

Best Alternatives to TinyLlama 1.1B Step 50K 105B

Best Alternatives                  Context / RAM    Downloads  Likes
Anubis Pro 105B V1                 128K / 210.7 GB  280        16
TinyLlama                          2K / 2.2 GB      66         0
...yLlama 1.1B Step 50K 105B ONNX  2K / n/a         4          1
Tiny Llama 1.1b Odia Ext           2K / 2.2 GB      13         0
...ma 1.1B Step 50K 105B AWQ 4bit  2K / 0.8 GB      8          0
...a 1.1B Step 50K 105B Gptq 4bit  2K / 0.8 GB      77         1
Note: a green score (e.g. "73.2") means that the model is better than TinyLlama/TinyLlama-1.1B-step-50K-105b.


Original data from HuggingFace, OpenCompass, and various public Git repos.
Release v20241227