MicroLlama by keeeeenw


Tags: Arxiv:2401.02385 · Autotrain compatible · Dataset: cerebras/slimpajama-62... · En · Endpoints compatible · Llama · Model-index · Region: us · Safetensors
Model Card on HF 🤗: https://huggingface.co/keeeeenw/MicroLlama


MicroLlama Parameters and Internals

Model Type: text-generation
Additional Notes: The project evolved from TinyLlama; a smaller model configuration was developed to fit budget and resource constraints.
Supported Languages: en
Training Details:
  Data Sources: cerebras/SlimPajama-627B
  Data Volume: 50B tokens
  Context Length: 2048
  Hardware Used: 4 × NVIDIA RTX 4090
  Model Architecture: block_size=2048, vocab_size=32000, padding_multiple=64, n_layer=12, n_head=16, n_embd=1024, rotary_percentage=1.0, parallel_residual=False, bias=False, _norm_class="FusedRMSNorm", norm_eps=1e-5, _mlp_class="LLaMAMLP", intermediate_size=5632, n_query_groups=4
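
These hyperparameters are enough to estimate the model's size. Below is a minimal back-of-the-envelope sketch in Python, assuming the standard TinyLlama/lit-gpt Llama block (RMSNorm weights, a gated LLaMA-style MLP, grouped-query attention, untied input/output embeddings, no biases):

```python
# Back-of-the-envelope parameter count from the listed hyperparameters.
# Assumes the TinyLlama/lit-gpt Llama recipe: RMSNorm, gated SiLU MLP,
# grouped-query attention, untied embeddings, no bias terms.

n_layer, n_head, n_embd = 12, 16, 1024
intermediate_size = 5632
vocab_size = 32000
n_query_groups = 4  # grouped-query attention: 4 KV heads shared by 16 Q heads

head_dim = n_embd // n_head           # 64
kv_dim = n_query_groups * head_dim    # 256

attn = 2 * n_embd * n_embd + 2 * n_embd * kv_dim  # Wq, Wo + Wk, Wv
mlp = 3 * n_embd * intermediate_size              # gate, up, down projections
norms = 2 * n_embd                                # two RMSNorm weights per block

per_layer = attn + mlp + norms
total = n_layer * per_layer + 2 * vocab_size * n_embd + n_embd  # + embeddings, lm_head, final norm

print(f"{total / 1e6:.0f}M parameters")        # ~305M
print(f"{total * 4 / 1e9:.1f} GB in float32")  # ~1.2 GB
```

The result, roughly 305M parameters (~1.2 GB in float32), matches the checkpoint size listed below.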
LLM Name: MicroLlama
Repository 🤗: https://huggingface.co/keeeeenw/MicroLlama
Model Size: 300m
Required VRAM: 1.2 GB
Updated: 2025-01-15
Maintainer: keeeeenw
Model Type: llama
Model Files: 1.2 GB
Supported Languages: en
Model Architecture: LlamaForCausalLM
License: apache-2.0
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.39.1
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32000
Torch Data Type: float32
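
Given the metadata above (LlamaForCausalLM, LlamaTokenizer, transformers 4.39.1), the model should load through the standard Hugging Face API. A minimal usage sketch; the prompt is illustrative:

```python
# Minimal inference sketch with the Hugging Face transformers API
# (transformers >= 4.39.1, per the metadata above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("keeeeenw/MicroLlama")
model = AutoModelForCausalLM.from_pretrained(
    "keeeeenw/MicroLlama",
    torch_dtype=torch.float32,  # weights ship in float32 (~1.2 GB)
)

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```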

Quantized Models of MicroLlama

Model                               Likes   Downloads   VRAM
...llama Python Instruct 0.3 4bit   0       24          0 GB
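
For comparison, any Llama-style checkpoint, including MicroLlama itself, can also be quantized to 4-bit on the fly rather than downloading a pre-quantized upload. A hedged sketch using transformers' bitsandbytes integration (assumes a CUDA device and the bitsandbytes package installed):

```python
# On-the-fly 4-bit quantization via transformers + bitsandbytes.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,  # compute in fp16, store weights in 4-bit
)
model = AutoModelForCausalLM.from_pretrained(
    "keeeeenw/MicroLlama",
    quantization_config=bnb_config,
    device_map="auto",  # bitsandbytes requires a CUDA device
)
```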

Best Alternatives to MicroLlama

Best Alternatives                   Context / RAM   Downloads   Likes
TinyLlama V1.1                      2K / 4.4 GB     527868      0
CroissantLLMBase                    2K / 5.4 GB     6483        1
TinyLlama V1.1 Math Code            2K / 4.4 GB     15091       0
TinyLlama 1.1B 1T OpenOrca          2K / 2.2 GB     257         7
...Llama 1.1B 1.5T OpenOrca Alpha   2K / 2.2 GB     73          4
TinyLlama V1.1 Chinese              2K / 4.4 GB     163         7
...inyLlama 1.1B 1T OpenOrca GPTQ   2K / 0.8 GB     17          2
TinyLlama 1.1B 1T OpenOrca AWQ      2K / 0.8 GB     16          2


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227