Tinyllama Quant by ElxsiGenwizards


Tags: Arxiv:2309.05463, Autotrain compatible, Code, En, Endpoints compatible, Phi, Region: US

Tinyllama Quant Benchmarks

Scores (nn.n%) show how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Tinyllama Quant (ElxsiGenwizards/tinyllama-quant)

Tinyllama Quant Parameters and Internals

Model Type 
text generation
Use Cases 
Primary Use Cases:
writing poems, drafting emails, creating stories, summarizing texts, writing Python code
Limitations:
may generate inaccurate code and facts; limited scope for code; unreliable responses to instructions; limited language coverage; potential societal biases; possible toxic output
Additional Notes 
Phi-1.5 is best suited for prompts using the QA format, the chat format, and the code format. It may produce irrelevant text following the main answer.
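The three prompt styles mentioned above can be sketched as small string builders. The exact templates below are illustrative assumptions modeled on the upstream Phi-1.5 documentation, not something defined by this repository:

```python
# Sketch of the three Phi-1.5 prompt formats mentioned in the notes.
# The templates are assumptions, not verbatim from this model card.

def qa_prompt(question: str) -> str:
    # QA format: pose a question and let the model continue after "Answer:"
    return f"{question}\n\nAnswer:"

def chat_prompt(user_turn: str) -> str:
    # Chat format: alternate named speakers; the model continues as "Bob"
    return f"Alice: {user_turn}\n\nBob:"

def code_prompt(signature: str, docstring: str) -> str:
    # Code format: give a signature and docstring for the model to complete
    return f'{signature}\n    """{docstring}"""\n'

print(qa_prompt("What is the capital of France?"))
```

Because the model may produce irrelevant text after the main answer, callers typically truncate the completion at the first blank line or speaker change.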
Supported Languages 
en (standard)
Training Details 
Data Sources:
same data sources as phi-1, augmented with a new data source consisting of various synthetic NLP texts
Data Volume:
150B tokens
Training Time:
8 days
Hardware Used:
32xA100-40G
Model Architecture:
Transformer-based model with next-word prediction objective
LLM Name: Tinyllama Quant
Repository: https://huggingface.co/ElxsiGenwizards/tinyllama-quant
Required VRAM: 0.8 GB
Updated: 2025-02-05
Maintainer: ElxsiGenwizards
Model Type: phi
Model Files: 0.0 GB, 0.8 GB, 0.8 GB
Supported Languages: en
Model Architecture: PhiForCausalLM
License: mit
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.37.0
Tokenizer Class: CodeGenTokenizer
Vocabulary Size: 51200
Torch Data Type: float16
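The 0.8 GB VRAM figure above can be sanity-checked with a back-of-the-envelope weight-memory estimate: parameter count times bytes per parameter for the storage dtype. The parameter count used below is an illustrative assumption, not a figure from this card:

```python
# Rough VRAM estimate for model weights alone (excludes activations
# and KV cache): parameters x bytes per parameter for the dtype.

BYTES_PER_PARAM = {"float32": 4, "float16": 2, "int8": 1, "int4": 0.5}

def weight_vram_gb(num_params: float, dtype: str) -> float:
    return num_params * BYTES_PER_PARAM[dtype] / 1e9

# Assumed example: 0.4e9 parameters stored in float16
print(round(weight_vram_gb(0.4e9, "float16"), 2))  # 0.8
```

Actual usage is higher at inference time, since activations and the KV cache (which grows with the 2048-token context) also occupy VRAM.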

Best Alternatives to Tinyllama Quant

Best Alternatives          | Context / RAM | Downloads | Likes
Phi1.5 Quantized 2         | 2K /     GB   |   5       | 0
Merlin1.4                  | 2K / 5.6 GB   | 104       | 0
Merlin1.5                  | 2K / 5.6 GB   | 103       | 0
Merlin1.2                  | 2K / 5.6 GB   |  98       | 0
Merlin1.3                  | 2K / 5.6 GB   | 100       | 0
Phi Bode 2 Ultraalpaca     | 2K / 5.6 GB   | 576       | 2
Phi 2 Fp8                  | 2K / 3.3 GB   |  89       | 0
Phi1.5 Update 4            | 2K /     GB   |   5       | 0
Phi 2 Fp8                  | 2K / 3.3 GB   |  81       | 0
Phi 2 Int8 Ov              | 2K / 0 GB     |  56       | 0
Note: a green score (e.g. "73.2") means that the model is better than ElxsiGenwizards/tinyllama-quant.


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227