LLaMA3 Iterative DPO Final by RLHFlow


Tags: arXiv:2312.11456, arXiv:2405.07863, AutoTrain compatible, Conversational, Endpoints compatible, Llama, Region: US, Safetensors, Sharded, TensorFlow

LLaMA3 Iterative DPO Final Benchmarks

Scores indicate how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
LLaMA3 Iterative DPO Final (RLHFlow/LLaMA3-iterative-DPO-final)

LLaMA3 Iterative DPO Final Parameters and Internals

Model Type: Large Language Model, Instruction Following

Use Cases:
  Areas: Research
  Applications: Language generation, chat assistance, instruction following
  Primary Use Cases: Instruction following, multi-turn dialogue
  Limitations: Can generate offensive or unethical content under adversarial conditions
  Considerations: The model is continuously improved; responsible usage is encouraged.

Additional Notes: Unofficial checkpoint released for research purposes.

Supported Languages: English (high proficiency)
Training Details:
  Data Sources: Preference data mix; prompt collection for RLHF training
  Data Volume: 700K
  Methodology: Iterative DPO with online RLHF
  Training Time: Not specified
  Model Architecture: LlamaForCausalLM (Llama 3 8B base), aligned via iterative DPO
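
To make the methodology concrete, here is a minimal sketch of the DPO objective that each round of iterative DPO optimizes. It is illustrative only, not RLHFlow's training code; the function and argument names (dpo_loss, the *_logps arguments, beta) are hypothetical.

```python
# Minimal sketch of the DPO loss (Rafailov et al., 2023) optimized in each
# round of iterative DPO. Illustrative only -- not RLHFlow's training code.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Each argument is the summed log-probability of a response under the
    trainable policy or the frozen reference model; beta scales the
    implicit reward."""
    # Implicit reward = beta * log-ratio of policy to reference probability.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the margin between chosen and rejected responses.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```

In the online (iterative) variant described in the referenced reports, each round the current policy generates fresh responses to new prompts, a preference signal labels chosen/rejected pairs, and the next DPO round trains on that fresh data rather than on a fixed offline set.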
Safety Evaluation:
  Methodologies: Not explicitly stated
  Findings: Potential for offensive content under adversarial conditions
  Risk Categories: Offensive content; ethical considerations
  Ethical Considerations: Safety and ethical considerations are integral to the alignment process.
Responsible AI Considerations:
  Fairness: Not specified
  Transparency: Technical report available
  Accountability: Developers and affiliated institution
  Mitigation Strategies: Continuous improvement of model safety
Input Output:
  Input Format: Chat-like prompts
  Accepted Modalities: Text
  Output Format: Generated text
  Performance Tips: Optimal performance on CUDA-enabled devices
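
A minimal usage sketch with the Hugging Face transformers library, assuming a CUDA GPU with at least ~16.1 GB of free VRAM; the prompt and generation parameters (e.g. max_new_tokens) are illustrative, not recommendations from RLHFlow:

```python
# Load the checkpoint and run a chat-style prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RLHFlow/LLaMA3-iterative-DPO-final"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the stored bfloat16 weights
    device_map="auto",           # places shards on available GPU(s)
)

messages = [{"role": "user", "content": "Summarize what iterative DPO is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Loading in bfloat16 matches the stored weights and halves memory relative to float32.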
Release Notes:
  Version: Final
  Notes: Initial release of the unofficial checkpoint showcasing online iterative RLHF.
LLM Name: LLaMA3 Iterative DPO Final
Repository: https://huggingface.co/RLHFlow/LLaMA3-iterative-DPO-final
Model Size: 8B
Required VRAM: 16.1 GB
Updated: 2025-02-22
Maintainer: RLHFlow
Model Type: llama
Model Files: 1-of-4 (5.0 GB), 2-of-4 (5.0 GB), 3-of-4 (4.9 GB), 4-of-4 (1.2 GB)
Model Architecture: LlamaForCausalLM
License: llama3
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.40.0.dev0
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <|end_of_text|>
Vocabulary Size: 128256
Torch Data Type: bfloat16
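
The VRAM figure follows directly from the parameter count and data type: each bfloat16 parameter occupies 2 bytes. A quick back-of-envelope check, assuming the standard Llama 3 8B parameter count of roughly 8.03 billion (not stated on this page):

```python
# Weights-only memory estimate; activations and KV cache add more at runtime.
params = 8.03e9          # approximate Llama 3 8B parameter count (assumption)
bytes_per_param = 2      # bfloat16 = 16 bits = 2 bytes
print(f"{params * bytes_per_param / 1e9:.1f} GB")  # -> 16.1 GB, matching the shard total
```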

Best Alternatives to LLaMA3 Iterative DPO Final

Best Alternatives | Context / RAM | Downloads | Likes
...a 3 8B Instruct Gradient 1048K | 1024K / 16.1 GB | 3927 | 680
MrRoboto ProLong 8B V4i | 1024K / 16.1 GB | 66 | 1
...o ProLongBASE Pt8 Unaligned 8B | 1024K / 16.1 GB | 24 | 0
MrRoboto BASE V2 Unholy 8B 64K | 1024K / 16.1 GB | 27 | 1
Mpasila Viking 8B | 1024K / 16.1 GB | 84 | 0
Thor V1.4 8B DARK FICTION | 1024K / 16.1 GB | 94 | 12
4 | 1024K / 16.1 GB | 322 | 0
Hel V2 8B DARK FICTION | 1024K / 16.1 GB | 22 | 0
16 | 1024K / 16.1 GB | 169 | 0
...di95 LewdStorytellerMix 8B 64K | 1024K / 16.1 GB | 69 | 2


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227