Llama 3 Tulu 2 DPO 8B by allenai


Arxiv:2305.18290 · Arxiv:2311.10702 · Autotrain compatible · Base model: allenai/llama-3-tulu-2-8b (finetune) · Conversational · Datasets: allenai/tulu-v2-sft-mixture, argilla/ultrafeedback-binarized-preferences-cleaned · En · Endpoints compatible · Llama · Pytorch · Region: us · Sharded

Llama 3 Tulu 2 DPO 8B Benchmarks

Benchmark scores (shown as nn.n%) indicate how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").

Llama 3 Tulu 2 DPO 8B Parameters and Internals

Model Type:
NLP
Use Cases:
Areas: Research
Applications: Language models
Primary Use Cases: Providing helpful assistance
Limitations: Can produce problematic outputs; training data size and composition are unknown
Additional Notes:
Not aligned to produce safe completions; the size of the training corpus is unknown.
Supported Languages:
en (primary language)
Training Details:
Data Sources: allenai/tulu-v2-sft-mixture, argilla/ultrafeedback-binarized-preferences-cleaned
Data Volume: Unknown
Methodology: Fine-tuning on the SFT mixture, followed by further training with DPO (see the sketch below)
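
For context, DPO (arXiv:2305.18290) optimizes the model directly on preference pairs such as the UltraFeedback data above, with no separate reward model. Below is a minimal sketch of the per-batch objective in PyTorch; the function and argument names are illustrative, and the per-sequence log-probabilities are assumed to be precomputed.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """DPO loss over a batch of preference pairs.

    Each tensor holds the summed log-probability of the chosen or rejected
    completion under the trained policy or the frozen reference model.
    """
    # How much more (or less) the policy likes each completion than the reference does
    chosen_logratios = policy_chosen_logps - ref_chosen_logps
    rejected_logratios = policy_rejected_logps - ref_rejected_logps
    # -log sigmoid(beta * margin): minimized when the policy prefers the chosen answer
    return -F.logsigmoid(beta * (chosen_logratios - rejected_logratios)).mean()
```

beta controls how far the policy may drift from the reference model; the DPO paper typically uses values around 0.1.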
Responsible AI Considerations:
Mitigation Strategies: The model has not been aligned to generate safe completions via an RLHF phase.
Input Output:
Input Format: <|user|>\nYour message here!\n<|assistant|>\n
Accepted Modalities: Text
Performance Tips: Add a newline after the <|assistant|> tag; this can noticeably affect generation quality (see the prompt-building sketch below).
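
A minimal sketch of a helper that assembles prompts in this format; the function name is illustrative, and the newline placement follows the tip above.

```python
def build_tulu_prompt(turns: list[tuple[str, str]]) -> str:
    """Format (role, message) turns in the Tulu chat format."""
    parts = [f"<|{role}|>\n{message}\n" for role, message in turns]
    parts.append("<|assistant|>\n")  # trailing newline: see the performance tip above
    return "".join(parts)

print(build_tulu_prompt([("user", "Your message here!")]))
# <|user|>
# Your message here!
# <|assistant|>
```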
LLM Name: Llama 3 Tulu 2 DPO 8B
Repository: 🤗 https://huggingface.co/allenai/llama-3-tulu-2-dpo-8b
Base Model(s): Llama 3 Tulu 2 8B (allenai/llama-3-tulu-2-8b)
Model Size: 8B
Required VRAM: 16.1 GB
Updated: 2025-02-05
Maintainer: allenai
Model Type: llama
Model Files: 10.0 GB (shard 1 of 2), 6.1 GB (shard 2 of 2)
Supported Languages: en
Model Architecture: LlamaForCausalLM
License: apache-2.0
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.31.0
Tokenizer Class: PreTrainedTokenizerFast
Beginning of Sentence Token: <s>
End of Sentence Token: </s>
Padding Token: <pad>
Unk Token: <unk>
Vocabulary Size: 128256
Torch Data Type: bfloat16
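
A minimal loading sketch consistent with the specs above (bfloat16 weights across two shards, 8192-token context, roughly 16 GB of VRAM for the weights alone); device placement and the printed check are illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/llama-3-tulu-2-dpo-8b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the Torch Data Type listed above
    device_map="auto",           # requires the accelerate package
)

print(model.config.max_position_embeddings)  # 8192, per Context Length above
```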

Best Alternatives to Llama 3 Tulu 2 DPO 8B

Best Alternatives                        Context / RAM    Downloads  Likes
Llama 3 8B Instruct Gradient 1048K       1024K / 16.1 GB       6623    678
MrRoboto ProLong 8B V4i                  1024K / 16.1 GB         66      1
MrRoboto ProLongBASE Pt8 Unaligned 8B    1024K / 16.1 GB         24      0
Mpasila Viking 8B                        1024K / 16.1 GB         59      0
4                                        1024K / 16.1 GB        322      0
Thor V1.4 8B DARK FICTION                1024K / 16.1 GB         94     12
16                                       1024K / 16.1 GB        169      0
Because Im Bored Nsfw1                   1024K / 16.1 GB         66      1
11                                       1024K / 16.1 GB        113      0
NBeerbower Narrative 8B 64K              1024K / 16.1 GB         32      1
Note: a green score (e.g., "73.2") means the model is better than allenai/llama-3-tulu-2-dpo-8b.

Rank the Llama 3 Tulu 2 DPO 8B Capabilities

🆘 Have you tried this model? Rate its performance. This feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

What open-source LLMs or SLMs are you in search of? 42,577 are indexed in total.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227