Snorkel Mistral PairRM DPO 8.0bpw H8 EXL2 by LoneStriker


Tags: Arxiv:2305.18290 · Arxiv:2306.02561 · Arxiv:2401.10020 · Autotrain compatible · Conversational · Dataset:snorkelai/snorkel-mist... · Endpoints compatible · Exl2 · Mistral · Pytorch · Quantized · Region:us

Snorkel Mistral PairRM DPO 8.0bpw H8 EXL2 Benchmarks

nn.n% — How the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o") or GPT-4 ("gpt4").
Snorkel Mistral PairRM DPO 8.0bpw H8 EXL2 (LoneStriker/Snorkel-Mistral-PairRM-DPO-8.0bpw-h8-exl2)

Snorkel Mistral PairRM DPO 8.0bpw H8 EXL2 Parameters and Internals

Model Type: text-generation
Use Cases
Limitations: No moderation mechanisms included.
Additional Notes: Endpoint intended for initial trials, not ongoing production use.
Training Details
Data Sources: snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset, UltraFeedback
Methodology:
1. Generate five response variations for each prompt in a 20,000-prompt subset, using the current LLM (initially Mistral-7B-Instruct-v0.2).
2. Apply PairRM to rerank the responses.
3. Update the LLM with Direct Preference Optimization (DPO) on the top (chosen) and bottom (rejected) responses.
4. Use the resulting LLM as the base model for the next iteration, repeating three times in total (see the sketch below).
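The loop below is a minimal sketch of that recipe. The helpers generate_candidates, pairrm_rank, and dpo_train, and the prompt list ultrafeedback_prompts_20k, are hypothetical placeholders standing in for the actual training stack.

```python
# Minimal sketch of the iterative PairRM + DPO recipe described above.
# All helper names are hypothetical placeholders, not the authors' code.

model = "mistralai/Mistral-7B-Instruct-v0.2"  # starting point per the card

for iteration in range(3):  # three iterations in total
    pairs = []
    for prompt in ultrafeedback_prompts_20k:  # ~20,000-prompt subset
        # 1. Sample five response variations from the current model.
        candidates = generate_candidates(model, prompt, n=5)
        # 2. Rerank the candidates with the PairRM reward model, best first.
        ranked = pairrm_rank(prompt, candidates)
        # 3. Keep the top response as "chosen" and the bottom as "rejected".
        pairs.append({"prompt": prompt,
                      "chosen": ranked[0],
                      "rejected": ranked[-1]})
    # 4. Run DPO on the preference pairs; the result seeds the next round.
    model = dpo_train(model, pairs)
```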
Input / Output
Input Format:
[INST] {prompt} [/INST]
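For reference, wrapping a user message in this template by hand in Python (single-turn form only, exactly as shown on the card):

```python
def format_prompt(user_message: str) -> str:
    """Wrap a single user turn in the Mistral-Instruct template."""
    return f"[INST] {user_message} [/INST]"

prompt = format_prompt("Summarize the DPO training procedure in one sentence.")
# The model's completion is generated after the closing [/INST] tag.
```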
LLM Name: Snorkel Mistral PairRM DPO 8.0bpw H8 EXL2
Repository: 🤗 https://huggingface.co/LoneStriker/Snorkel-Mistral-PairRM-DPO-8.0bpw-h8-exl2
Required VRAM: 7.4 GB
Updated: 2025-01-19
Maintainer: LoneStriker
Model Type: mistral
Model Files: 7.4 GB, 0.0 GB
Quantization Type: exl2
Model Architecture: MistralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.34.0
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32000
Torch Data Type: bfloat16
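Since this is an EXL2 quant, it loads with the exllamav2 library rather than plain transformers. A minimal sketch, assuming the weights have been downloaded to a local directory (the path and sampling settings here are illustrative, not from the card):

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Assumed local path to the downloaded 8.0bpw EXL2 weights.
config = ExLlamaV2Config()
config.model_dir = "./Snorkel-Mistral-PairRM-DPO-8.0bpw-h8-exl2"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # KV cache, up to the 32768 context
model.load_autosplit(cache)               # split layers across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7  # illustrative sampling values
settings.top_p = 0.9

prompt = "[INST] What is Direct Preference Optimization? [/INST]"
print(generator.generate_simple(prompt, settings, 256))
```

The 7.4 GB Required VRAM figure above covers the weights; the KV cache grows with context length on top of that.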

Best Alternatives to Snorkel Mistral PairRM DPO 8.0bpw H8 EXL2

Best Alternatives                    Context / RAM     Downloads  Likes
NemoMix Unleashed EXL2 4bpw          1000K / 7.3 GB    54         7
...eZephir Sft Instruct Ead 16bit    32K / 14.4 GB     56         0
...cr To Json V1 HQQ 1bit Smashed    32K / 1.6 GB      23         0
...cr To Json V1 HQQ 4bit Smashed    32K / 4.2 GB      20         0
ScikitLLM Model EXL2                 32K / 3 GB        7          1
Chargen V2 8bpw EXL2                 32K / 7.4 GB      11         1
HamSter 0.2 8.0bpw H8 EXL2           32K / 7.4 GB      13         1
...N L1 Chat RL V1.6.0bpw H6 EXL2    32K / 5.6 GB      15         1
...N L1 Chat RL V1.3.0bpw H6 EXL2    32K / 3 GB        14         0
...Bruins V2.1.1 8.0bpw H8 EXL2 2    32K / 7.4 GB      13         2
Note: a green score (e.g. "73.2") means that the model is better than LoneStriker/Snorkel-Mistral-PairRM-DPO-8.0bpw-h8-exl2.

Rank the Snorkel Mistral PairRM DPO 8.0bpw H8 EXL2 Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227