Llama 3.1 Tulu 3 8B DPO by allenai

Tags: Arxiv:2411.15124, Autotrain compatible, Base model: allenai/llama-3.1-t..., Base model (finetune): allenai/ll..., Conversational, Dataset: allenai/llama-3.1-tulu..., En, Endpoints compatible, Llama, Pytorch, Region: us, Safetensors, Sharded, Tensorflow

Llama 3.1 Tulu 3 8B DPO Benchmarks

nn.n% — How the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o") or GPT-4 ("gpt4").
Llama 3.1 Tulu 3 8B DPO (allenai/Llama-3.1-Tulu-3-8B-DPO)
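As a minimal sketch of what such a comparison figure could mean, assuming the percentage is simply the model's benchmark score expressed relative to the reference model's score (the exact formula behind these listings is not stated here):

```python
# Hypothetical illustration: a model's score as a percentage of a reference
# model's score (e.g. GPT-4o). The scores below are made-up example numbers.
def relative_score(model_score: float, reference_score: float) -> float:
    return 100.0 * model_score / reference_score

print(f"{relative_score(62.4, 71.1):.1f}%")  # -> 87.8%
```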

Llama 3.1 Tulu 3 8B DPO Parameters and Internals

Model Type: Instruction following
Use Cases (Areas): Research, Education
Supported Languages: en (primary)
Training Data Sources: Publicly available datasets, synthetic datasets, human-created datasets
Training Context Length: 2048
LLM Name: Llama 3.1 Tulu 3 8B DPO
Repository: 🤗 https://huggingface.co/allenai/Llama-3.1-Tulu-3-8B-DPO
Base Model(s): allenai/Llama-3.1-Tulu-3-8B-SFT
Model Size: 8B
Required VRAM: 16.1 GB
Updated: 2025-05-15
Maintainer: allenai
Model Type: llama
Model Files: 5.0 GB (1-of-4), 5.0 GB (2-of-4), 4.9 GB (3-of-4), 1.2 GB (4-of-4)
Supported Languages: en
Model Architecture: LlamaForCausalLM
License: llama3.1
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.43.4
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <pad>
Vocabulary Size: 128264
Torch Data Type: bfloat16
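
The checkpoint is stored as four sharded bfloat16 Safetensors files totalling roughly 16 GB (about 8B parameters × 2 bytes each), which matches the Required VRAM figure above. A minimal loading and generation sketch, assuming transformers >= 4.43 with accelerate installed, PyTorch with a GPU that has around 16 GB of free memory, and the chat template shipped with the tokenizer:

```python
# Minimal sketch: load the sharded bfloat16 checkpoint and run one chat turn.
# Assumes transformers >= 4.43, accelerate, and ~16 GB of free GPU memory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/Llama-3.1-Tulu-3-8B-DPO"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # PreTrainedTokenizerFast with a <pad> token
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # matches the stored dtype
    device_map="auto",            # places the four shards on available devices
)

messages = [{"role": "user", "content": "Give me a one-sentence summary of DPO."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```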

Best Alternatives to Llama 3.1 Tulu 3 8B DPO

Best Alternatives | Context / RAM | Downloads / Likes
...otron 8B UltraLong 4M Instruct | 4192K / 32.1 GB | 5687104
UltraLong Thinking | 4192K / 16.1 GB | 2712
...a 3.1 8B UltraLong 4M Instruct | 4192K / 32.1 GB | 17624
...otron 8B UltraLong 2M Instruct | 2096K / 32.1 GB | 102715
...a 3.1 8B UltraLong 2M Instruct | 2096K / 32.1 GB | 8759
...otron 8B UltraLong 1M Instruct | 1048K / 32.1 GB | 499441
Zero Llama 3.1 8B Beta6 | 1048K / 16.1 GB | 1051
...a 3.1 8B UltraLong 1M Instruct | 1048K / 32.1 GB | 138729
...dger Nu Llama 3.1 8B UltraLong | 1048K / 16.2 GB | 532
....1 1million Ctx Dark Planet 8B | 1048K / 32.3 GB | 392
Note: a green score (e.g. "73.2") means that the model is better than allenai/Llama-3.1-Tulu-3-8B-DPO.
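
Download and like counts on these listings change over time; current figures for any repository can be pulled from the Hugging Face Hub API. A small sketch using the huggingface_hub client (the repo IDs below are taken from this page, not from the truncated names in the table above):

```python
# Sketch: query live download/like counts from the Hugging Face Hub.
# Assumes `pip install huggingface_hub`; works for any public model repo.
from huggingface_hub import model_info

for repo_id in ("allenai/Llama-3.1-Tulu-3-8B-DPO", "allenai/Llama-3.1-Tulu-3-8B-SFT"):
    info = model_info(repo_id)
    print(f"{repo_id}: {info.downloads} downloads, {info.likes} likes")
```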

Rank the Llama 3.1 Tulu 3 8B DPO Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227