Tulu 2 DPO 13B by chujiezheng


Tags: Arxiv:2305.18290, Arxiv:2311.10702, Autotrain compatible, Base model:finetune:meta-llama..., Base model:meta-llama/llama-2-..., Conversational, Dataset:allenai/tulu-v2-sft-mi..., Dataset:huggingfaceh4/ultrafee..., En, Endpoints compatible, Llama, Region:us, Safetensors, Sharded, Tensorflow

Tulu 2 DPO 13B Benchmarks

Scores (nn.n%) indicate how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
Tulu 2 DPO 13B (chujiezheng/tulu-2-dpo-13b)

Tulu 2 DPO 13B Parameters and Internals

Model Type 
instruction tuned, RLHF tuned, chat model
Use Cases 
Limitations:
The model can produce problematic outputs (especially when prompted to do so).
Supported Languages 
English (Primary)
Training Details 
Data Sources:
HuggingFaceH4/ultrafeedback_binarized, allenai/tulu-v2-sft-mixture, openbmb/UltraFeedback
Methodology:
Direct Preference Optimization (DPO)
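
The cited methodology is Direct Preference Optimization (arXiv:2305.18290). For orientation only, below is a minimal sketch of the standard DPO loss over chosen/rejected response pairs; it is not the authors' training code, and the function name and the beta value are assumptions.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Illustrative DPO objective; inputs are summed log-probs of each response
    under the trained policy and the frozen reference (SFT) model."""
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Push the implicit reward of the preferred response above the rejected one.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```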
Safety Evaluation 
Ethical Considerations:
The Tulu models have not been aligned to generate safe completions within the RLHF phase, nor deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). The size and composition of the corpus used to train the base Llama 2 models are also unknown.
Input Output 
Input Format:
<|user|>
Your message here!
<|assistant|>
Performance Tips:
Make sure to include a newline after `<|assistant|>`; this can affect generation quality quite a bit. A usage sketch follows below.
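
A minimal generation sketch for the template above (an illustrative assumption, not part of the original card). It assumes the transformers, torch, and accelerate packages are installed; the example prompt and sampling settings are arbitrary choices.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "chujiezheng/tulu-2-dpo-13b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Note the trailing newline after <|assistant|>; omitting it can hurt output quality.
prompt = "<|user|>\nWrite a short poem about mountains.\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```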
LLM Name: Tulu 2 DPO 13B
Repository: https://huggingface.co/chujiezheng/tulu-2-dpo-13b
Base Model(s): Llama 2 13B Hf (meta-llama/Llama-2-13b-hf)
Model Size: 13b
Required VRAM: 26 GB
Updated: 2025-02-22
Maintainer: chujiezheng
Model Type: llama
Model Files: 5.0 GB (1-of-6), 5.0 GB (2-of-6), 5.0 GB (3-of-6), 4.9 GB (4-of-6), 4.9 GB (5-of-6), 1.2 GB (6-of-6)
Supported Languages: en
Model Architecture: LlamaForCausalLM
License: other
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.40.1
Tokenizer Class: LlamaTokenizer
Beginning of Sentence Token: <s>
End of Sentence Token: </s>
Unk Token: <unk>
Vocabulary Size: 32000
Torch Data Type: bfloat16
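
As a quick sanity check on the listed figures (an editorial note, not from the original card): 13B parameters stored in bfloat16 occupy roughly 2 bytes each, which is where the ~26 GB weight footprint comes from; actual memory use is higher once activations and the KV cache are counted.

```python
# Back-of-envelope weight-memory estimate (assumes 13e9 parameters, 2 bytes per bfloat16 value).
params = 13e9
bytes_per_param = 2  # bfloat16
print(f"~{params * bytes_per_param / 1e9:.0f} GB of weights")  # ~26 GB, matching the listing
```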

Best Alternatives to Tulu 2 DPO 13B

Best Alternatives | Context / RAM | Downloads | Likes
Yarn Llama 2 13B 128K | 128K / 26 GB | 4968 | 113
Luminaura RP 13B | 128K / 26 GB | 27 | 0
Agent Llama2 13B 80K | 80K / 26.4 GB | 15 | 0
Chat Llama2 13B 80K | 80K / 52.8 GB | 13 | 0
Yarn Llama 2 13B 64K | 64K / 26 GB | 7190 | 17
LongAlign 13B 64K | 64K / 26 GB | 31 | 13
LongAlign 13B 64K Base | 64K / 26 GB | 26 | 3
Openbuddy Llama2 13B V15p1 64K | 64K / 26.1 GB | 13 | 4
Openbuddy Llama2 13b64k V15 | 64K / 26.1 GB | 16 | 1
Airoboros L2 13B 2.1 YaRN 64K | 64K / 26 GB | 13 | 7

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227