Tinyllama 32K by LouisML

Tags: Autotrain compatible · Dataset: togethercomputer/RedPajama-Data-1T-Sample · En · Endpoints compatible · Llama · Llama 2 · PyTorch · Region: US · Safetensors
Model Card on HF 🤗: https://huggingface.co/LouisML/tinyllama_32k

Tinyllama 32K Benchmarks

nn.n% — how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4"). No scores are currently shown for Tinyllama 32K (LouisML/tinyllama_32k).

Tinyllama 32K Parameters and Internals

Model Type: text generation

Additional Notes: This model is a long-context fine-tune of TinyLlama-1.1B with an increased RoPE theta (the rotary position embedding base frequency). Raising the base stretches the rotary embeddings' wavelengths so that positions well beyond the original training window remain distinguishable, which is what lets a 1.1B model serve as a long-context draft model for speculative decoding (see the usage sketch below).

Supported Languages: en (proficient)

Training Details:
Data Sources: togethercomputer/RedPajama-Data-1T-Sample
Data Volume: 1 trillion tokens
Methodology: increased RoPE theta for long-context speculative decoding
Context Length: 32768
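
As a rough illustration of that use case, the sketch below uses this model as the draft ("assistant") model for speculative (assisted) decoding with Hugging Face transformers. The target model name is an assumption chosen for illustration; any larger causal LM that shares the Llama tokenizer (32000-token vocabulary, per the table below) should work the same way.

```python
# Minimal sketch (not from the model card): tinyllama_32k as a draft model
# for assisted generation. Requires transformers >= 4.36 plus accelerate.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

TARGET = "meta-llama/Llama-2-13b-hf"  # assumed target model, illustration only
DRAFT = "LouisML/tinyllama_32k"

tokenizer = AutoTokenizer.from_pretrained(TARGET)
target = AutoModelForCausalLM.from_pretrained(
    TARGET, torch_dtype=torch.bfloat16, device_map="auto"
)
draft = AutoModelForCausalLM.from_pretrained(
    DRAFT, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Long-context draft models are useful because"
inputs = tokenizer(prompt, return_tensors="pt").to(target.device)

# Passing assistant_model switches generate() into assisted (speculative)
# decoding: the small draft proposes several tokens at a time and the target
# verifies them in a single forward pass, without changing the target's output.
out = target.generate(**inputs, assistant_model=draft, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

The draft's 32K window matters here: the assistant model must process the same prompt as the target, so a stock 2K-context draft would break down on long prompts even when the target itself supports 32K.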
LLM Name: Tinyllama 32k
Repository 🤗: https://huggingface.co/LouisML/tinyllama_32k
Model Size: 1.1b
Required VRAM: 2.2 GB
Updated: 2025-02-22
Maintainer: LouisML
Model Type: llama
Model Files: 2.2 GB
Supported Languages: en
Model Architecture: LlamaForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.36.2
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32000
Torch Data Type: bfloat16
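
The key settings in the table above can be confirmed directly from the published config and tokenizer; a minimal sketch, assuming the repository is reachable:

```python
# Minimal sketch: verify the listed settings against the repo's config files.
from transformers import AutoConfig, AutoTokenizer

repo = "LouisML/tinyllama_32k"

config = AutoConfig.from_pretrained(repo)
print(config.model_type)                    # expected: "llama"
print(config.max_position_embeddings)       # expected: 32768
print(getattr(config, "rope_theta", None))  # raised RoPE base; exact value depends on the repo

tokenizer = AutoTokenizer.from_pretrained(repo)
print(type(tokenizer).__name__)             # LlamaTokenizer (or its fast variant)
print(tokenizer.vocab_size)                 # expected: 32000
print(tokenizer.pad_token)                  # expected: "</s>"
```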

Best Alternatives to Tinyllama 32K

Best Alternatives | Context / RAM | Downloads | Likes
Coven Tiny 1.1b 32k Orpo Alpha | 32K / 2.2 GB | 161 | 2
Test Mix 01 | 32K / 2.2 GB | 166 | 0
Palmer Merge Test 5 | 32K / 2.2 GB | 157 | 0
...llama 1.1B 16K Instructions V4 | 32K / 2.2 GB | 98 | 0
Palmer 002 32K | 32K / 2.2 GB | 171 | 0
TinyLlama 1.1B 32K Instruct | 32K / 2.2 GB | 235 | 12
Tinyllama History Chat V1.1 | 32K / 2.2 GB | 122 | 0
TinyLlama 1.1B 32K | 32K / 2.2 GB | 121 | 28
TinyJ.O.S.I.E. 1.1B 32K Base | 32K / 2.2 GB | 30 | 1
TinyJ.O.S.I.E. 1.1B 32K Base | 32K / 2.2 GB | 5 | 1

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227