Smol Llama 220M Bees Internal by BEE-spoke-data

Tags: Autotrain compatible · Base model (finetune): BEE-spoke-data/smol_llama-220M-GQA · Dataset: BEE-spoke-data/bees-internal · English · Endpoints compatible · Generated from trainer · Llama · Region: US · Safetensors

Smol Llama 220M Bees Internal Benchmarks

Benchmark scores (percentages) indicate how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Smol Llama 220M Bees Internal (BEE-spoke-data/smol_llama-220M-bees-internal)
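As a quick way to try the model locally, here is a minimal sketch that loads it with the Hugging Face transformers library. The repo id, bfloat16 dtype, and 2048-token context length come from the details table below; the prompt and sampling settings are illustrative assumptions, not maintainer recommendations.

```python
# Minimal sketch: load and sample from the model with transformers.
# Repo id and dtype come from this page; sampling settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "BEE-spoke-data/smol_llama-220M-bees-internal"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # checkpoint is stored in bfloat16
)

prompt = "In beekeeping, the queen excluder"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=64,   # stays well inside the 2048-token context window
    do_sample=True,      # temperature/top_p below are illustrative
    temperature=0.8,
    top_p=0.95,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```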

Smol Llama 220M Bees Internal Parameters and Internals

Model Type: text-generation
Additional Notes: Fine-tuning used a learning rate of 1e-4, a training batch size of 4 with gradient accumulation every 8 steps, and the Adam optimizer with betas (0.9, 0.95) and epsilon 1e-8. Training ran for 2 epochs under a cosine learning-rate scheduler with a warmup ratio of 0.05 (see the sketch after the details table below).
Supported Languages: en (high proficiency)
Training Details — Data Sources: BEE-spoke-data/bees-internal
LLM Name: Smol Llama 220M Bees Internal
Repository: https://huggingface.co/BEE-spoke-data/smol_llama-220M-bees-internal
Base Model(s): Smol Llama 220M GQA (BEE-spoke-data/smol_llama-220M-GQA)
Model Size: 220M
Required VRAM: 0.4 GB
Updated: 2024-12-30
Maintainer: BEE-spoke-data
Model Type: llama
Model Files: 0.4 GB, 0.0 GB
Supported Languages: en
Model Architecture: LlamaForCausalLM
License: apache-2.0
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.36.2
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32128
Torch Data Type: bfloat16
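For reference, here is how the fine-tuning hyperparameters from the Additional Notes might be expressed as transformers TrainingArguments (version 4.36 per the table above). This is an assumed reconstruction from the notes, not the maintainer's actual training script.

```python
# Sketch: the reported hyperparameters as TrainingArguments.
# Assumed reconstruction; the original training script is not published here.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="smol_llama-220M-bees-internal",  # hypothetical output path
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=8,   # gradients accumulated every 8 steps
    num_train_epochs=2,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    adam_beta1=0.9,                  # Adam betas (0.9, 0.95) from the notes
    adam_beta2=0.95,
    adam_epsilon=1e-8,
    bf16=True,                       # matches the bfloat16 torch dtype above
)
```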

Best Alternatives to Smol Llama 220M Bees Internal

Best Alternatives                  | Context / RAM | Downloads | Likes
Smol Llama 220M GQA 32K Theta Sft  | 32K / 0.4 GB  | 14        | 2
Smol Llama 220M GQA 32K Theta      | 32K / 0.4 GB  | 15        | 1
Saily 220B                         | 4K / 417 GB   | 1716      | 20
Smol Llama 220M GQA                | 2K / 0.4 GB   | 2425      | 12
Smol Llama 220M Openhermes         | 2K / 0.4 GB   | 1053      | 5
Smol Llama 220M GQA Fineweb Edu    | 2K / 0.4 GB   | 57        | 1
Smol Llama 220M Open Instruct      | 2K / 0.4 GB   | 38        | 2
Beecoder 220M Python               | 2K / 0.4 GB   | 30        | 2
Saily 220B GPTQ                    | 4K / 105.2 GB | 21        | 1
Saily 220B AWQ                     | 4K / 109.1 GB | 20        | 0

Original data from Hugging Face, OpenCompass, and various public Git repos.
Release v20241227