SFR Iterative DPO LLaMA 3 8B R GGUF by sirovub


Tags: Arxiv:2312.11456 · Arxiv:2405.07863 · Autotrain compatible · Conversational · Endpoints compatible · GGUF · Llama · Quantized · Region:us · Safetensors · Sharded · Tensorflow

SFR Iterative DPO LLaMA 3 8B R GGUF Benchmarks

nn.n% — how the model scores relative to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
SFR Iterative DPO LLaMA 3 8B R GGUF (sirovub/SFR-Iterative-DPO-LLaMA-3-8B-R-GGUF)

SFR Iterative DPO LLaMA 3 8B R GGUF Parameters and Internals

Model Type: text generation
Additional Notes: SFR-Iterative-DPO-LLaMA-3-8B-R is a research model developed as part of our RLHF initiative at Salesforce. While safety and ethical considerations are integral to our alignment process, there remains the possibility that the model could generate offensive or unethical content, particularly under adversarial conditions.
Training Details:
  Data Sources: open-sourced datasets
  Methodology: Iterative DPO
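The methodology listed above is iterative Direct Preference Optimization. As a rough illustration only (not the authors' training code), the per-pair DPO objective can be sketched as follows; the beta value and the log-probability inputs are illustrative assumptions:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for one preference pair: -log sigmoid of the reward margin.

    Each argument is the summed token log-probability of the chosen or
    rejected response under the current policy or the frozen reference model.
    """
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    margin = chosen_reward - rejected_reward
    # -log(sigmoid(margin)) == log(1 + exp(-margin)), computed stably
    if margin > 0:
        return math.log1p(math.exp(-margin))
    return -margin + math.log1p(math.exp(margin))

# The iterative variant re-generates preference pairs with the current
# policy between rounds and repeats this optimization on the fresh pairs.
```

The loss falls as the policy ranks the chosen response further above the rejected one relative to the reference model.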
LLM Name: SFR Iterative DPO LLaMA 3 8B R GGUF
Repository 🤗: https://huggingface.co/sirovub/SFR-Iterative-DPO-LLaMA-3-8B-R-GGUF
Model Size: 8b
Required VRAM: 16.1 GB
Updated: 2025-02-22
Maintainer: sirovub
Model Type: llama
Model Files: 16.1 GB total — 5.0 GB (1-of-4), 5.0 GB (2-of-4), 4.9 GB (3-of-4), 1.2 GB (4-of-4)
GGUF Quantization: Yes
Quantization Type: gguf
Model Architecture: LlamaForCausalLM
License: llama3
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.39.3
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <|end_of_text|>
Vocabulary Size: 128256
Torch Data Type: bfloat16
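The tokenizer rows above (Llama 3 special tokens, <|end_of_text|> padding, 8192-token context) imply the standard Llama 3 chat template. A minimal sketch of rendering that template by hand, assuming the usual Llama 3 header tokens:

```python
# Llama 3 special tokens; the model's 8192-token context window bounds
# how long the fully formatted prompt may grow.
BOS = "<|begin_of_text|>"
EOT = "<|eot_id|>"

def format_llama3_chat(messages):
    """Render a list of {"role", "content"} dicts into a Llama 3 prompt,
    ending with an open assistant header so the model continues from there."""
    parts = [BOS]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}{EOT}"
        )
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = format_llama3_chat([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
```

In practice a GGUF runtime such as llama-cpp-python applies this template for you, e.g. `Llama(model_path="...", n_ctx=8192).create_chat_completion(messages=...)` (the model path here is a placeholder, not a file name from this repo).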

Best Alternatives to SFR Iterative DPO LLaMA 3 8B R GGUF

Best Alternatives                        Context / RAM      Downloads / Likes
10K V6                                   1024K / 16.1 GB    50
...truct Gradient 1048K IMat GGUF        1024K / 2 GB       3506
...B Instruct Gradient 1048K GGUF        1024K / 3.2 GB     1523
Unhinged Llama3 8B 524K                  512K / 26.5 GB     250
Llama 3 8B Instruct 262K GGUF            256K / 3.2 GB      1042
... 8B Instruct Reasoner 1o1 V0.3        128K / 16.1 GB     4177
Nsfw Sce Test                            128K / 16.1 GB     100
Nsfw Plz Gguf Me                         128K / 16.1 GB     383
Nsfw I Hate My Life V1                   128K / 32.1 GB     160
Reflection Llama 3.1 8B                  128K / 16.1 GB     227116
Note: a green score (e.g. "73.2") means the model outperforms sirovub/SFR-Iterative-DPO-LLaMA-3-8B-R-GGUF.
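The size spread among these 8B-class alternatives (roughly 2 GB to 32.1 GB) follows from bytes per weight: this repo's 16.1 GB file is a 16-bit GGUF (8B parameters × 2 bytes ≈ 16 GB), while the 2-3.2 GB alternatives use heavier quantization. A back-of-envelope estimator, with the bits-per-weight values as illustrative assumptions:

```python
def gguf_size_gb(n_params_billion, bits_per_weight):
    """Rough GGUF file size in GB: parameters * bits / 8, ignoring
    tokenizer metadata and per-block quantization overhead."""
    return n_params_billion * 1e9 * bits_per_weight / 8 / 1e9

# 8B at 16 bits/weight ≈ 16 GB, matching this repo's 16.1 GB file;
# 8B at ~2-3 bits/weight lands in the 2-3 GB range seen above.
```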

Rank the SFR Iterative DPO LLaMA 3 8B R GGUF Capabilities


Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227