OpenBezoar HH RLHF DPO by SurgeGlobal


Arxiv:2305.18290, Arxiv:2306.02707, Arxiv:2404.12195, Autotrain compatible, Base model (finetune): SurgeGlobal/OpenBezoar-HH-RLHF-SFT, Dataset: anthropic/hh-rlhf, En, Endpoints compatible, Llama, Pytorch, Region: us, Safetensors

OpenBezoar HH RLHF DPO Benchmarks

OpenBezoar HH RLHF DPO (SurgeGlobal/OpenBezoar-HH-RLHF-DPO)

OpenBezoar HH RLHF DPO Parameters and Internals

Model Type 
text-generation
Use Cases 
Limitations:
The model may not consistently follow instructions, and it can respond inappropriately or get stuck in loops.
Although the model is aligned to human preferences and has been evaluated for performance, it is not guaranteed to refrain from generating harmful content.
Caution is urged against relying on this model for production or production-adjacent use cases.
Training Details 
Data Sources:
Anthropic's HH-RLHF Dataset
Data Volume:
First 100K examples
Methodology:
Direct Preference Optimization (DPO); see the sketch below.
Model Architecture:
OpenLLaMA 3B v2 architecture
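The training recipe above applies the DPO objective from arXiv:2305.18290 to the first 100K preference pairs of Anthropic's HH-RLHF, starting from the SFT checkpoint. The sketch below is illustrative only, not the authors' training code; the `dpo_loss` helper name and the beta value are assumptions for demonstration.

```python
# Illustrative sketch of the DPO objective (arXiv:2305.18290) over the
# HH-RLHF preference data described above. Not the authors' training code;
# the helper name and beta value are assumptions for demonstration.
import torch
import torch.nn.functional as F
from datasets import load_dataset

# First 100K preference pairs from Anthropic's HH-RLHF (each row has a
# "chosen" and a "rejected" conversation).
prefs = load_dataset("Anthropic/hh-rlhf", split="train[:100000]")

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """DPO loss given summed token log-probs of each completion under the
    trainable policy and the frozen reference model (here, the SFT checkpoint)."""
    policy_logratio = policy_chosen_logps - policy_rejected_logps
    ref_logratio = ref_chosen_logps - ref_rejected_logps
    # Push the policy to prefer the chosen completion by a wider margin than
    # the reference model does; beta controls the strength of that pressure.
    return -F.logsigmoid(beta * (policy_logratio - ref_logratio)).mean()
```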
Input Output 
Input Format:
Modified version of the Alpaca prompt template
Performance Tips:
Use the Alpaca-style prompt template to obtain the best responses for instruction-related tasks (an illustrative layout is sketched below).
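The exact modified template is documented in the model repository; the snippet below shows the standard Alpaca layout as a hypothetical starting point, with the instruction text chosen purely for illustration.

```python
# Standard Alpaca prompt layout, shown for illustration only. The model card
# states it uses a *modified* version of this template, so check the
# repository (https://huggingface.co/SurgeGlobal/OpenBezoar-HH-RLHF-DPO)
# for the exact format before relying on this.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

prompt = ALPACA_TEMPLATE.format(
    instruction="List three limitations of preference-tuned language models."
)
```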
LLM Name: OpenBezoar HH RLHF DPO
Repository: https://huggingface.co/SurgeGlobal/OpenBezoar-HH-RLHF-DPO
Base Model(s): OpenBezoar HH RLHF SFT (SurgeGlobal/OpenBezoar-HH-RLHF-SFT)
Model Size: 3b
Required VRAM: 6.8 GB
Updated: 2025-02-22
Maintainer: SurgeGlobal
Model Type: llama
Model Files: 6.8 GB, 6.8 GB
Supported Languages: en
Model Architecture: LlamaForCausalLM
License: cc-by-nc-4.0
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.33.2
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32000
Torch Data Type: float16
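Given the metadata above (LlamaForCausalLM architecture, LlamaTokenizer, float16 weights, 2048-token context), a minimal Transformers loading sketch might look like the following. The prompt text and generation settings are illustrative assumptions, not tuned recommendations.

```python
# Minimal loading sketch based on the metadata above: LlamaForCausalLM,
# LlamaTokenizer, float16 weights, 2048-token context. Generation settings
# are illustrative, not tuned recommendations.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SurgeGlobal/OpenBezoar-HH-RLHF-DPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",  # requires `accelerate`; ~6.8 GB of weights
)

prompt = (
    "### Instruction:\nExplain what DPO fine-tuning is in one paragraph.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```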

Best Alternatives to OpenBezoar HH RLHF DPO

Best Alternatives | Context / RAM | Downloads | Likes
Llama 3.2 3B Instruct | 128K / 6.5 GB | 1603089 | 1085
Llama 3.2 3B | 128K / 6.5 GB | 292157 | 509
Hermes 3 Llama 3.2 3B | 128K / 6.5 GB | 21968 | 144
ReasoningCore 3B RE1 V2 | 128K / 6.5 GB | 116 | 0
...penReasoner Llama 3.2 3B Rs1.0 | 128K / 6.5 GB | 147 | 1
Zeitgeist 3B V1 | 128K / 6.5 GB | 49 | 2
...eflection L3.2 JametMiniMix 3B | 128K / 6.4 GB | 110 | 0
Calme 3.1 Llamaloi 3B | 128K / 10.6 GB | 2911 | 1
Dolphin3.0 Llama3.2 3B | 128K / 6.5 GB | 18800 | 36
... 3.2 3B Math Instruct RE1 ORPO | 128K / 6.5 GB | 135 | 0
Note: a green Score (e.g. "73.2") means that the model is better than SurgeGlobal/OpenBezoar-HH-RLHF-DPO.


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227