Kellemar DPO Orca Distilled 7B SLERP 8.0bpw H8 EXL2 by LoneStriker


Tags: Autotrain compatible · Base model (finetune): mlabonne/Marcoro14-7B-slerp · Dataset: argilla/distilabel-intel-orca-dpo-pairs · Endpoints compatible · EXL2 · Mistral · Quantized · Region: us · Safetensors


Kellemar DPO Orca Distilled 7B SLERP 8.0bpw H8 EXL2 Parameters and Internals

Model Type: text generation
Primary Use Cases: basic inference, further fine-tuning
Training Details:
  Data Source: https://huggingface.co/datasets/argilla/distilabel-intel-orca-dpo-pairs
  Methodology: fine-tuned with DPO using Maxime Labonne's Google Colab notebook for Mistral 7B
  Training Context Length: 1024
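
The DPO fine-tune can be reproduced in spirit with Hugging Face trl. Below is a minimal sketch, not the notebook's exact code: the hyperparameters, output directory, and column mapping are illustrative assumptions, and it assumes a recent trl release that provides DPOConfig.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "mlabonne/Marcoro14-7B-slerp"  # the base model listed below
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Map the preference pairs onto the prompt/chosen/rejected columns that
# DPOTrainer expects; source column names follow the public dataset card.
raw = load_dataset("argilla/distilabel-intel-orca-dpo-pairs", split="train")
dataset = raw.map(
    lambda row: {"prompt": row["input"], "chosen": row["chosen"], "rejected": row["rejected"]},
    remove_columns=raw.column_names,
)

args = DPOConfig(
    output_dir="kellemar-dpo",       # illustrative
    beta=0.1,                        # KL-penalty strength (illustrative)
    max_length=1024,                 # matches the training context length above
    per_device_train_batch_size=2,
    num_train_epochs=1,
)
trainer = DPOTrainer(model=model, args=args, train_dataset=dataset, processing_class=tokenizer)
trainer.train()
```
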
LLM Name: Kellemar DPO Orca Distilled 7B SLERP 8.0bpw H8 EXL2
Repository 🤗: https://huggingface.co/LoneStriker/kellemar-DPO-Orca-Distilled-7B-SLERP-8.0bpw-h8-exl2
Base Model(s): Marcoro14 7B Slerp (mlabonne/Marcoro14-7B-slerp)
Model Size: 7B
Required VRAM: 7.4 GB
Updated: 2025-02-05
Maintainer: LoneStriker
Model Type: mistral
Model Files: 7.4 GB
Quantization Type: exl2
Model Architecture: MistralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.36.2
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32000
Torch Data Type: float16
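
Because the quantization type is EXL2, the files target the exllamav2 runtime rather than plain transformers. A minimal loading sketch, assuming a recent exllamav2 release (the prompt and token budget are illustrative):

```python
from huggingface_hub import snapshot_download
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

# Fetch the 7.4 GB of EXL2 weights from the repository listed above.
model_dir = snapshot_download("LoneStriker/kellemar-DPO-Orca-Distilled-7B-SLERP-8.0bpw-h8-exl2")

config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # KV cache allocated as layers load
model.load_autosplit(cache)               # split weights across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Explain DPO in one sentence.", max_new_tokens=128))
```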

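The "SLERP" in the base model's name refers to spherical linear interpolation, the merge method behind mlabonne/Marcoro14-7B-slerp: parent weight tensors are interpolated along the great circle between them rather than linearly. A toy sketch of the interpolation step (illustrative only; real merge tools such as mergekit apply it tensor-by-tensor with per-layer interpolation factors):

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight tensors."""
    # Angle between the two weight vectors.
    dot = np.clip(
        np.dot(v0, v1) / (np.linalg.norm(v0) * np.linalg.norm(v1) + eps),
        -1.0, 1.0,
    )
    omega = np.arccos(dot)
    if abs(np.sin(omega)) < eps:      # nearly parallel: fall back to plain lerp
        return (1.0 - t) * v0 + t * v1
    return (np.sin((1.0 - t) * omega) * v0 + np.sin(t * omega) * v1) / np.sin(omega)

# Usage: merged = slerp(0.5, weights_a.ravel(), weights_b.ravel())
```
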
Best Alternatives to Kellemar DPO Orca Distilled 7B SLERP 8.0bpw H8 EXL2

Best Alternatives | Context / RAM | Downloads | Likes
...al Nemo Instruct 2407 Bnb 4bit | 1000K / 8.3 GB | 13674 | 27
...istral Nemo Base 2407 Bnb 4bit | 1000K / 8.3 GB | 7492 | 14
...t 3.5 0106 128K 8.0bpw H8 EXL2 | 128K / 7.4 GB | 6 | 1
...t 3.5 0106 128K 4.0bpw H6 EXL2 | 128K / 3.9 GB | 5 | 1
...tral 7B Instruct V0.3 Bnb 4bit | 32K / 4.1 GB | 82949 | 18
Mistral 7B Sci Pretrain | 32K / 4.1 GB | 512 | 0
Mistral 7B V0.3 Bnb 4bit | 32K / 4.1 GB | 26350 | 15
Mistral 7B Instruct V0.2 Fp16 | 32K / 14.4 GB | 26 | 0
Mistral 7B Instruct V0.2 4bit | 32K / 4.3 GB | 290 | 1
...tral 7B Instruct V0.2 Bnb 4bit | 32K / 4.1 GB | 11770 | 32

Rank the Kellemar DPO Orca Distilled 7B SLERP 8.0bpw H8 EXL2 Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227