DPO Qlora Qwen1.5 0.5B Chat Xtuner by Amu


Tags: autotrain-compatible · conversational · en · endpoints-compatible · pytorch · qwen2 · region:us

DPO Qlora Qwen1.5 0.5B Chat Xtuner (Amu/dpo-qlora-Qwen1.5-0.5B-Chat-xtuner)

DPO Qlora Qwen1.5 0.5B Chat Xtuner Parameters and Internals

Model Type: DPO

Use Cases
Limitations:
- May generate inaccurate code and facts: the model can produce incorrect code snippets and statements.
- Unreliable responses to instructions: the model has not undergone instruction fine-tuning.

Training Details
Data sources:
- HuggingFaceH4/ultrafeedback_binarized
Methodology:
- Direct Preference Optimization (DPO)
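DPO trains the policy directly on preference pairs (such as those in ultrafeedback_binarized) without a separate reward model, by increasing the likelihood margin of the chosen response over the rejected one relative to a frozen reference model. As a rough illustration only (the card's training was done with XTuner; this is not that implementation), the per-pair DPO loss can be sketched in plain Python:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for a single preference pair.

    Each argument is the summed log-probability of the chosen/rejected
    response under the trained policy or the frozen reference model.
    beta controls how far the policy may drift from the reference.
    """
    chosen_logratio = policy_chosen_logp - ref_chosen_logp
    rejected_logratio = policy_rejected_logp - ref_rejected_logp
    logits = beta * (chosen_logratio - rejected_logratio)
    # -log(sigmoid(logits)), written stably as log(1 + exp(-logits))
    return math.log1p(math.exp(-logits))
```

When the policy matches the reference exactly, the loss is log 2; it falls below that as the policy learns to favor the chosen response more strongly than the reference does.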
LLM Name: DPO Qlora Qwen1.5 0.5B Chat Xtuner
Repository: https://huggingface.co/Amu/dpo-qlora-Qwen1.5-0.5B-Chat-xtuner
Model Size: 0.5B
Required VRAM: 0.9 GB
Updated: 2025-02-22
Maintainer: Amu
Model Type: qwen2
Model Files: 0.9 GB
Supported Languages: en
Model Architecture: Qwen2ForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.39.3
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 151936
Torch Data Type: float16
Errors: replace
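A minimal usage sketch for the parameters above, assuming the `transformers` library (the card lists version 4.39.3) and the repo id shown. The prompt helper mirrors the ChatML format that Qwen1.5 chat models use, which `tokenizer.apply_chat_template` would normally produce for you:

```python
MODEL_ID = "Amu/dpo-qlora-Qwen1.5-0.5B-Chat-xtuner"

def build_chatml_prompt(user_msg, system_msg="You are a helpful assistant."):
    """Render one user turn in Qwen's ChatML format."""
    return (
        f"<|im_start|>system\n{system_msg}<|im_end|>\n"
        f"<|im_start|>user\n{user_msg}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

def generate_reply(user_msg, max_new_tokens=64):
    """Download the model and generate a reply.

    Requires torch, transformers, and network access; loads in float16
    per the card's listed dtype.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.float16, device_map="auto"
    )
    inputs = tokenizer(
        build_chatml_prompt(user_msg), return_tensors="pt"
    ).to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens, keep only the newly generated reply
    return tokenizer.decode(
        out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```

At 0.9 GB of float16 weights, the model fits comfortably on a single consumer GPU or even CPU-only inference.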

Best Alternatives to DPO Qlora Qwen1.5 0.5B Chat Xtuner

Best Alternatives | Context / RAM | Downloads | Likes
Reader Lm 0.5B | 250K / 1 GB | 450 | 135
Qwen2 0.5B | 128K / 1 GB | 623886 | 134
...PRYMMAL 0.5B FT V4 MUSR Mathis | 128K / 1 GB | 62 | 1
...0.5B FT EnhancedMUSREnsembleV3 | 128K / 1 GB | 32 | 1
....5B FT V4 MUSR ENSEMBLE Mathis | 128K / 2 GB | 30 | 1
...0.5B FT MUSR ENSEMBLE V2Mathis | 128K / 1 GB | 25 | 0
ECE PRYMMAL 0.5B FT V3 MUSR | 128K / 2 GB | 67 | 0
ECE PRYMMAL 0.5B FT V4 MUSR | 128K / 2 GB | 68 | 0
Qwen2 0.5B | 128K / 1 GB | 5570 | 3
ECE PRYMMAL0.5B Youri | 128K / 1.3 GB | 3 | 1
Note: a green score (e.g. "73.2") means that the model is better than Amu/dpo-qlora-Qwen1.5-0.5B-Chat-xtuner.



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227