Tulu 2 DPO 70B by allenai


Tags: Arxiv:2305.18290, Arxiv:2311.10702, Autotrain compatible, Base model (finetune): meta-llama/Llama-2-70b-hf, Conversational, Datasets: allenai/tulu-v2-sft-mixture, HuggingFaceH4/ultrafeedback_binarized, En, Endpoints compatible, Llama, PyTorch, Region: US, Safetensors, Sharded, TensorFlow
Model Card on HF 🤗: https://huggingface.co/allenai/tulu-2-dpo-70b

Tulu 2 DPO 70B Benchmarks

Tulu 2 DPO 70B (allenai/tulu-2-dpo-70b)

Tulu 2 DPO 70B Parameters and Internals

Model Type 
Instruction- and RLHF-tuned chat model
Use Cases 
Areas:
Research, Commercial applications
Primary Use Cases:
Instruction following, Dialog generation
Limitations:
The model can produce problematic outputs, especially when prompted to do so
Additional Notes 
Model is a fine-tuned version of Llama 2 and uses a mix of publicly available, synthetic, and human datasets.
Supported Languages 
en (Primary)
Training Details 
Data Sources:
HuggingFaceH4/ultrafeedback_binarized, allenai/tulu-v2-sft-mixture, openbmb/UltraFeedback
Data Volume:
64k prompts and model completions
Methodology:
Fine-tuning with Direct Preference Optimization (DPO)
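
For reference, the DPO objective from Arxiv:2305.18290 (the standard formulation, not anything specific to this card) optimizes the policy $\pi_\theta$ directly on preference pairs $(x, y_w, y_l)$ against a frozen reference policy $\pi_{\mathrm{ref}}$ (for Tulu 2, the SFT checkpoint), with $\beta$ controlling how far the policy may drift from the reference:

$$
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}} \left[ \log \sigma \left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)} \right) \right]
$$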
Input Output 
Input Format:
<|user|>
Your message here!
<|assistant|>
Accepted Modalities:
text
Performance Tips:
Include a newline after `<|assistant|>` to ensure correct generation
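
A minimal sketch of assembling a prompt in this format (plain string formatting; the helper name is mine, and no chat-template API is assumed):

```python
def build_tulu_prompt(user_message: str) -> str:
    # Tulu v2 format: a <|user|> turn, then the <|assistant|> tag.
    # The trailing newline after <|assistant|> is the tip noted above.
    return f"<|user|>\n{user_message}\n<|assistant|>\n"


print(build_tulu_prompt("Summarize DPO in one sentence."))
```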
LLM Name: Tulu 2 DPO 70B
Repository 🤗: https://huggingface.co/allenai/tulu-2-dpo-70b
Base Model(s): Llama 2 70B HF (meta-llama/Llama-2-70b-hf)
Model Size: 70B
Required VRAM: 138 GB
Updated: 2025-01-14
Maintainer: allenai
Model Type: llama
Model Files: 9.8 GB: 1-of-15, 9.8 GB: 2-of-15, 10.0 GB: 3-of-15, 9.8 GB: 4-of-15, 9.8 GB: 5-of-15, 9.8 GB: 6-of-15, 10.0 GB: 7-of-15, 9.8 GB: 8-of-15, 9.8 GB: 9-of-15, 9.8 GB: 10-of-15, 10.0 GB: 11-of-15, 9.8 GB: 12-of-15, 9.8 GB: 13-of-15, 9.5 GB: 14-of-15, 0.5 GB: 15-of-15
Supported Languages: en
Model Architecture: LlamaForCausalLM
License: other
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.33.2
Tokenizer Class: LlamaTokenizer
Beginning of Sentence Token: <s>
End of Sentence Token: </s>
Unk Token: <unk>
Vocabulary Size: 32000
Torch Data Type: bfloat16
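
Given the bfloat16 weights (~138 GB of VRAM required, as listed above), the model is typically loaded sharded across several GPUs. Below is a minimal loading sketch with Hugging Face transformers, assuming the accelerate package is installed for device_map="auto"; the prompt follows the format documented earlier:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/tulu-2-dpo-70b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" (via accelerate) shards the ~138 GB of bfloat16
# weights across all visible GPUs.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "<|user|>\nWhat does DPO change versus PPO-based RLHF?\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```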

Quantized Models of the Tulu 2 DPO 70B

Model | Likes | Downloads | VRAM
Tulu 2 DPO 70B GGUF | 6 | 58 | 29 GB
Tulu 2 DPO 70B GPTQ | 3 | 19 | 35 GB
Tulu 2 DPO 70B AWQ | 3 | 15 | 36 GB
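
The GGUF build brings the footprint down to roughly 29 GB, within reach of a single large GPU or CPU RAM. A usage sketch with llama-cpp-python follows; the exact .gguf file name depends on which quantization you download and is hypothetical here:

```python
from llama_cpp import Llama

# Hypothetical file name; substitute the GGUF quant you actually downloaded.
llm = Llama(model_path="tulu-2-dpo-70b.Q2_K.gguf", n_ctx=8192)

out = llm("<|user|>\nSay hello in three languages.\n<|assistant|>\n", max_tokens=128)
print(out["choices"][0]["text"])
```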

Best Alternatives to Tulu 2 DPO 70B

Best Alternatives | Context / RAM | Downloads | Likes
... Chat 1048K Chinese Llama3 70B | 1024K / 141.9 GB | 3009 | 5
... 3 70B Instruct Gradient 1048K | 1024K / 141.9 GB | 327 | 121
Llama3 Function Calling 1048K | 1024K / 141.9 GB | 18 | 1
...a 3 70B Instruct Gradient 524K | 512K / 141.9 GB | 67 | 23
...a 3 70B Instruct Gradient 262K | 256K / 141.9 GB | 87 | 55
...ama 3 70B Arimas Story RP V2.0 | 256K / 141.1 GB | 51 | 3
...ama 3 70B Arimas Story RP V1.6 | 256K / 141.2 GB | 23 | 0
...ama 3 70B Arimas Story RP V1.5 | 256K / 141.2 GB | 26 | 2
Yi 70B 200K RPMerge Franken | 195K / 142.4 GB | 12 | 1
...a 3.1 Nemotron 70B Instruct HF | 128K / 141.9 GB | 359761 | 1988

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227