Tulu V2.5 Ppo 13B Uf Mean 70B Uf Rm by allenai


Arxiv: 2406.09279 · Autotrain compatible · Base model: allenai/tulu-2-13b (finetune) · Conversational · Datasets: allenai/tulu-2.5-preference-data, allenai/tulu-v2-sft-mixture · En · Endpoints compatible · Llama · Pytorch · Region: us · Sharded

Tulu V2.5 Ppo 13B Uf Mean 70B Uf Rm Benchmarks

Benchmark scores (nn.n%) for allenai/tulu-v2.5-ppo-13b-uf-mean-70b-uf-rm are reported relative to the reference models: Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").

Tulu V2.5 Ppo 13B Uf Mean 70B Uf Rm Parameters and Internals

Model Type 
RLHF-tuned chat model
Use Cases 
Limitations:
The model may produce problematic outputs, as it was not aligned to generate safe completions during the RLHF phase.
The size and composition of the corpus used to train the base Llama 2 models are unknown; it likely included a mix of web data and technical sources.
Additional Notes 
The model was trained using a Jax-based PPO trainer built on EasyLM. The larger 70B reward model benefited from a smaller KL penalty during PPO training.
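For context, this KL penalty is the regularizer in the standard RLHF objective that PPO optimizes; a sketch of the usual formulation (notation assumed here rather than taken from the paper; β is the KL coefficient):

\[
\max_{\pi_\theta}\; \mathbb{E}_{x \sim \mathcal{D},\; y \sim \pi_\theta(\cdot \mid x)}\left[\, r_\phi(x, y) \;-\; \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)} \,\right]
\]

Here \(r_\phi\) is the 70B UltraFeedback reward model and \(\pi_{\mathrm{ref}}\) is the Tulu 2 13B starting point; a smaller \(\beta\) lets the policy drift further from the reference, which the note above reports paired well with the larger reward model.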
Supported Languages 
en (English)
Training Details 
Data Sources:
https://huggingface.co/datasets/allenai/tulu-2.5-preference-data
https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture
Methodology:
Part of the Tulu 2.5 suite, trained with DPO and PPO starting from the Tulu 2 models. This model was trained with PPO on UltraFeedback prompts, using a 70B reward model that was itself trained on UltraFeedback; only prompts from the ultrafeedback_mean_aspects split were used.
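As a minimal sketch (dataset IDs from the links above; the ultrafeedback_mean_aspects split name comes from the model card, so verify it exists on the Hub before relying on it), the training data can be inspected with the datasets library:

```python
# Sketch: inspect the preference data and SFT mixture used for Tulu 2.5.
from datasets import load_dataset

# Split name taken from the model card; confirm against the dataset repo.
prefs = load_dataset("allenai/tulu-2.5-preference-data",
                     split="ultrafeedback_mean_aspects")
sft_mix = load_dataset("allenai/tulu-v2-sft-mixture", split="train")

print(prefs[0].keys())          # e.g. prompt / chosen / rejected fields
print(len(prefs), len(sft_mix))
```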
Input Output 
Input Format:
<|user|>
Your message here!
<|assistant|>
Performance Tips:
Be sure to include a newline after <|assistant|>, as it noticeably affects generation quality.
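A minimal generation sketch following this format (model ID from this page; the dtype and sampling settings below are illustrative assumptions, not values from the card):

```python
# Sketch: Tulu chat format with the required newline after <|assistant|>.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/tulu-v2.5-ppo-13b-uf-mean-70b-uf-rm"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Note the trailing newline after <|assistant|> -- omitting it hurts quality.
prompt = "<|user|>\nYour message here!\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```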
LLM Name: Tulu V2.5 Ppo 13B Uf Mean 70B Uf Rm
Repository: https://huggingface.co/allenai/tulu-v2.5-ppo-13b-uf-mean-70b-uf-rm
Base Model(s): Tulu 2 13B (allenai/tulu-2-13b)
Model Size: 13b
Required VRAM: 26 GB
Updated: 2025-02-22
Maintainer: allenai
Model Type: llama
Model Files: 1-of-3 (9.9 GB), 2-of-3 (9.9 GB), 3-of-3 (6.2 GB)
Supported Languages: en
Model Architecture: LlamaForCausalLM
License: apache-2.0
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.31.0
Tokenizer Class: LlamaTokenizer
Beginning of Sentence Token: <s>
End of Sentence Token: </s>
Unk Token: <unk>
Vocabulary Size: 32000
Torch Data Type: bfloat16
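The 26 GB VRAM figure follows directly from the parameter count and dtype; a quick weights-only check (excluding KV cache and activations):

```python
# Rough weights-only memory estimate: ~13B params at bfloat16 (2 bytes each).
n_params = 13e9
bytes_per_param = 2  # bfloat16
print(f"~{n_params * bytes_per_param / 1e9:.0f} GB")  # ~26 GB, matching the table
```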

Best Alternatives to Tulu V2.5 Ppo 13B Uf Mean 70B Uf Rm

Best Alternatives | Context / RAM | Downloads | Likes
Yarn Llama 2 13B 128K | 128K / 26 GB | 4968 | 113
Luminaura RP 13B | 128K / 26 GB | 27 | 0
Agent Llama2 13B 80K | 80K / 26.4 GB | 15 | 0
Chat Llama2 13B 80K | 80K / 52.8 GB | 13 | 0
Yarn Llama 2 13B 64K | 64K / 26 GB | 7190 | 17
LongAlign 13B 64K | 64K / 26 GB | 31 | 13
LongAlign 13B 64K Base | 64K / 26 GB | 26 | 3
Openbuddy Llama2 13B V15p1 64K | 64K / 26.1 GB | 13 | 4
Openbuddy Llama2 13b64k V15 | 64K / 26.1 GB | 16 | 1
Airoboros L2 13B 2.1 YaRN 64K | 64K / 26 GB | 13 | 7

Rank the Tulu V2.5 Ppo 13B Uf Mean 70B Uf Rm Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227