Llama 3 70B Orpo V0.1 by dfurman


Tags: autotrain compatible, base model (finetune): meta-llama/Meta-Llama-3-70B, conversational, dataset: mlabonne/orpo-dpo-mix-40k, en, endpoints compatible, llama, llama 3, model-index, orpo, region:us, rlhf, safetensors, sft, sharded, tensorflow

Llama 3 70B Orpo V0.1 Benchmarks

Llama 3 70B Orpo V0.1 (dfurman/Llama-3-70B-Orpo-v0.1)

Llama 3 70B Orpo V0.1 Parameters and Internals

Model Type: text generation
Additional Notes: An ORPO fine-tune of meta-llama/Meta-Llama-3-70B on a 2k-sample subset of mlabonne/orpo-dpo-mix-40k, using the ChatML chat template.
Training Details:
  Data Sources: mlabonne/orpo-dpo-mix-40k
  Data Volume: 2k samples
  Methodology: ORPO fine-tuning on 2k samples with the ChatML template (a training sketch follows below)
  Context Length: 8000
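Under these settings, a run can be outlined with TRL's ORPOTrainer. The sketch below is an illustration consistent with the details above, not the maintainer's exact recipe: the hyperparameters, the 2k-sample selection, and the way the ChatML tokens are added are all assumptions.

```python
# Hypothetical ORPO fine-tuning sketch consistent with the training details above.
# Hyperparameters and the 2k-sample selection are illustrative assumptions.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base_model = "meta-llama/Meta-Llama-3-70B"

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model, torch_dtype=torch.bfloat16, device_map="auto"
)

# ChatML template plus its two marker tokens; the card's vocabulary size of 128258
# (128256 base tokens + 2) and the <|im_end|> padding token are consistent with this.
tokenizer.chat_template = (
    "{% for message in messages %}"
    "{{ '<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>\n' }}"
    "{% endfor %}"
    "{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}"
)
tokenizer.add_special_tokens({"additional_special_tokens": ["<|im_start|>", "<|im_end|>"]})
tokenizer.pad_token = "<|im_end|>"
model.resize_token_embeddings(len(tokenizer))

# 2k-sample subset of the preference dataset listed above (prompt/chosen/rejected).
dataset = (
    load_dataset("mlabonne/orpo-dpo-mix-40k", split="train")
    .shuffle(seed=42)
    .select(range(2000))
)

args = ORPOConfig(
    output_dir="llama-3-70b-orpo-v0.1",
    max_length=8192,                    # assumed to match the 8k context reported above
    beta=0.1,                           # ORPO preference-loss weight; illustrative value
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-6,
    num_train_epochs=1,
)

trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,         # use `tokenizer=` on older TRL releases
)
trainer.train()
```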
Input / Output:
  Input Format: ChatML template
  Accepted Modalities: text
  Output Format: generated text
  Performance Tips: use Flash Attention on supported hardware (see the generation example below)
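As a concrete illustration of this input/output contract, the minimal sketch below loads the repo with Transformers, renders ChatML messages through the tokenizer's chat template, and enables Flash Attention 2 where the hardware and the flash-attn package support it. The prompt and sampling settings are placeholders, not recommendations from the card.

```python
# Minimal generation sketch: ChatML-style messages in, generated text out,
# with Flash Attention 2 enabled per the performance tip above (requires flash-attn).
import torch
import transformers

model_id = "dfurman/Llama-3-70B-Orpo-v0.1"

tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)
model = transformers.AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,                # matches the float16 weights listed below
    device_map="auto",
    attn_implementation="flash_attention_2",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write a haiku about open-source language models."},
]

# apply_chat_template renders the ChatML markers (<|im_start|> ... <|im_end|>).
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```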
LLM Name: Llama 3 70B Orpo V0.1
Repository: https://huggingface.co/dfurman/Llama-3-70B-Orpo-v0.1
Base Model(s): Meta Llama 3 70B (meta-llama/Meta-Llama-3-70B)
Model Size: 70b
Required VRAM: 141.9 GB
Updated: 2025-02-05
Maintainer: dfurman
Model Type: llama
Model Files: 4.6 GB (1-of-30), 4.7 GB (2-of-30), 5.0 GB (3-of-30), 5.0 GB (4-of-30), 4.7 GB (5-of-30), 4.7 GB (6-of-30), 4.7 GB (7-of-30), 5.0 GB (8-of-30), 5.0 GB (9-of-30), 4.7 GB (10-of-30), 4.7 GB (11-of-30), 4.7 GB (12-of-30), 5.0 GB (13-of-30), 5.0 GB (14-of-30), 4.7 GB (15-of-30), 4.7 GB (16-of-30), 4.7 GB (17-of-30), 5.0 GB (18-of-30), 5.0 GB (19-of-30), 4.7 GB (20-of-30), 4.7 GB (21-of-30), 4.7 GB (22-of-30), 5.0 GB (23-of-30), 5.0 GB (24-of-30), 4.7 GB (25-of-30), 4.7 GB (26-of-30), 4.7 GB (27-of-30), 5.0 GB (28-of-30), 5.0 GB (29-of-30), 2.1 GB (30-of-30)
Supported Languages: en
Model Architecture: LlamaForCausalLM
License: llama3
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.40.1
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <|im_end|>
Vocabulary Size: 128258
Torch Data Type: float16
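These settings can be confirmed without downloading the 141.9 GB of weight shards, since the config and tokenizer files carry most of them. A quick check might look like the sketch below; the commented values are the ones expected from the table above, assuming the repo metadata is unchanged.

```python
# Read the repo's config and tokenizer metadata only (no weight shards downloaded).
from transformers import AutoConfig, AutoTokenizer

repo = "dfurman/Llama-3-70B-Orpo-v0.1"

config = AutoConfig.from_pretrained(repo)
tokenizer = AutoTokenizer.from_pretrained(repo)

print(config.max_position_embeddings)   # 8192
print(config.torch_dtype)               # torch.float16
print(config.vocab_size)                # 128258
print(type(tokenizer).__name__)         # PreTrainedTokenizerFast
print(tokenizer.pad_token)              # <|im_end|>
```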

Best Alternatives to Llama 3 70B Orpo V0.1

Best Alternatives | Context / RAM | Downloads / Likes
... Chat 1048K Chinese Llama3 70B | 1024K / 141.9 GB | 40355
... 3 70B Instruct Gradient 1048K | 1024K / 141.9 GB | 114121
Llama3 Function Calling 1048K | 1024K / 141.9 GB | 51
...a 3 70B Instruct Gradient 524K | 512K / 141.9 GB | 15723
...a 3 70B Instruct Gradient 262K | 256K / 141.9 GB | 9355
...ama 3 70B Arimas Story RP V1.5 | 256K / 141.2 GB | 1952
...ama 3 70B Arimas Story RP V2.0 | 256K / 141.1 GB | 313
...ama 3 70B Arimas Story RP V1.6 | 256K / 141.2 GB | 50
Yi 70B 200K RPMerge Franken | 195K / 142.4 GB | 71
DeepSeek R1 Distill Llama 70B | 128K / 141 GB | 156830453


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227