Tinyllama 1.1B Sum DPO Full LR2e 7 3epochs Old by martimfasantos


Tags: Alignment-handbook · Autotrain compatible · Base model (finetune): martimfasantos/tinyllama-1.1b-sum-sft-full_old · Dataset: openai/summarize_from_feedback · Dpo · Endpoints compatible · Generated from trainer · Llama · Region: us · Safetensors · Tensorboard · Trl

Tinyllama 1.1B Sum DPO Full LR2e 7 3epochs Old Benchmarks

Scores (nn.n%) show how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Tinyllama 1.1B Sum DPO Full LR2e 7 3epochs Old (martimfasantos/tinyllama-1.1b-sum-dpo-full_LR2e-7_3epochs_old)

Tinyllama 1.1B Sum DPO Full LR2e 7 3epochs Old Parameters and Internals

Training Details 
Data Sources:
openai/summarize_from_feedback
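
For context, DPO (Direct Preference Optimization) tunes the model directly on preference pairs such as the chosen/rejected summaries in openai/summarize_from_feedback. Below is a minimal sketch of the DPO objective in Python; the log-probability tensors and the beta value are illustrative assumptions, since the card above only states the learning rate (2e-7) and epoch count (3):

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """DPO loss (Rafailov et al., 2023).

    Each argument is a tensor of summed log-probabilities that the policy
    (or the frozen reference model) assigns to the chosen / rejected summary.
    beta=0.1 is an assumed value; the model card does not state it.
    """
    # Log-ratio of policy to reference for each side of the preference pair.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the margin between chosen and rejected completions.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```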
LLM Name: Tinyllama 1.1B Sum DPO Full LR2e 7 3epochs Old
Repository 🤗: https://huggingface.co/martimfasantos/tinyllama-1.1b-sum-dpo-full_LR2e-7_3epochs_old
Base Model(s): martimfasantos/tinyllama-1.1b-sum-sft-full_old
Model Size: 1.1b
Required VRAM: 4.4 GB
Updated: 2025-02-22
Maintainer: martimfasantos
Model Type: llama
Model Files: 4.4 GB, 0.0 GB
Model Architecture: LlamaForCausalLM
License: apache-2.0
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.41.2
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32000
Torch Data Type: float32
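
Given the specs above (LlamaForCausalLM, LlamaTokenizer, float32 weights at roughly 4.4 GB, 2048-token context), here is a minimal loading-and-inference sketch using the Hugging Face transformers library. The prompt format is an assumption for illustration; the card does not document one:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "martimfasantos/tinyllama-1.1b-sum-dpo-full_LR2e-7_3epochs_old"

# LlamaTokenizer with a 32000-token vocabulary, per the card above.
tokenizer = AutoTokenizer.from_pretrained(repo_id)
# float32 weights need ~4.4 GB of memory, matching the listed VRAM requirement.
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.float32)

# The model was DPO-tuned for summarization, so prompt it accordingly.
# This prompt template is an assumption, not documented by the card.
prompt = "Summarize the following post:\n\n<post text here>\n\nSummary:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)  # stay within the 2048-token context
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```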

Best Alternatives to Tinyllama 1.1B Sum DPO Full LR2e 7 3epochs Old

Best Alternatives | Context / RAM | Downloads | Likes
Coven Tiny 1.1b 32k Orpo Alpha | 32K / 2.2 GB | 161 | 2
Test Mix 01 | 32K / 2.2 GB | 166 | 0
Palmer Merge Test 5 | 32K / 2.2 GB | 157 | 0
...llama 1.1B 16K Instructions V4 | 32K / 2.2 GB | 98 | 0
Palmer 002 32K | 32K / 2.2 GB | 171 | 0
TinyLlama 1.1B 32K Instruct | 32K / 2.2 GB | 235 | 12
Tinyllama History Chat V1.1 | 32K / 2.2 GB | 122 | 0
TinyLlama 1.1B 32K | 32K / 2.2 GB | 121 | 28
TinyJ.O.S.I.E. 1.1B 32K Base | 32K / 2.2 GB | 30 | 1
Tinyllama 32k | 32K / 2.2 GB | 84 | 0

Rank the Tinyllama 1.1B Sum DPO Full LR2e 7 3epochs Old Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable models for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

43470 open-source LLMs and SLMs listed in total.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227