Qwen1.5 0.5B DPO Mix 7K by burtenshaw


Tags: arxiv:1910.09700, autotrain-compatible, conversational, en, endpoints-compatible, license:mit, model-index, qwen2, region:us, safetensors

Rank the Qwen1.5 0.5B DPO Mix 7K Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
Qwen1.5 0.5B DPO Mix 7K (burtenshaw/Qwen1.5-0.5B-dpo-mix-7k)

Best Alternatives to Qwen1.5 0.5B DPO Mix 7K

Alternative | Context / RAM | Downloads | Likes
Qwen1.5 0.5B Chat | 32K / N/A | N/A | 64
Qwen1.5 0.5B | 32K / N/A | N/A | 151
Qwen1.5 0.5B Finetuning | 32K / 0 GB | 15 | 0
D Qwen1.5 0.5B | 32K / 0.9 GB | 2597 | 6
Tau 0.5B Instruct | 32K / 0.9 GB | 201 | 5
Qwen1.5 Wukong 0.5B | 32K / 0.9 GB | 2978 | 4
FinguAI Chat V1 | 32K / 0.9 GB | 2184 | 3
Tau 0.5B Instruct DPOP | 32K / 0.9 GB | 2640 | 2
Qwen1.5 0.5B Vortex | 32K / 0.9 GB | 4699 | 1
Qwen1.5 0.5B Vortex 0.1 | 32K / 0.9 GB | 2010 | 1
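The download and like figures above are a static snapshot from the listing; current numbers can be pulled from the Hugging Face Hub directly. A minimal sketch, assuming the huggingface_hub package is installed; the Qwen/… repository ids are the official upstream models and are used here only as illustrative stand-ins for the listed alternatives:

```python
# Minimal sketch: fetch current download/like counts from the Hugging Face Hub.
# Assumes `huggingface_hub` is installed; repo ids other than the subject model
# are illustrative examples, not taken from this page.
from huggingface_hub import HfApi

api = HfApi()
for repo_id in [
    "burtenshaw/Qwen1.5-0.5B-dpo-mix-7k",  # subject of this page
    "Qwen/Qwen1.5-0.5B-Chat",              # official chat model (example alternative)
    "Qwen/Qwen1.5-0.5B",                   # official base model (example alternative)
]:
    info = api.model_info(repo_id)
    print(f"{repo_id}: downloads={info.downloads}, likes={info.likes}")
```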

Qwen1.5 0.5B DPO Mix 7K Parameters and Internals

LLM Name: Qwen1.5 0.5B DPO Mix 7K
Repository: burtenshaw/Qwen1.5-0.5B-dpo-mix-7k (open on 🤗 Hugging Face)
Model Size: 0.5b
Required VRAM: 1.2 GB
Updated: 2024-04-18
Maintainer: burtenshaw
Model Type: qwen2
Model Files: 1.2 GB, 0.0 GB
Model Architecture: Qwen2ForCausalLM
License: mit
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.39.2
Vocabulary Size: 151936
Initializer Range: 0.02
Torch Data Type: bfloat16
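The fields above map one-to-one onto a standard transformers loading call. A minimal sketch, assuming transformers >= 4.39.2 and torch are installed, roughly 1.2 GB of free memory, and that the tokenizer ships a chat template (the model is tagged as conversational); the example prompt is illustrative only:

```python
# Minimal sketch: load burtenshaw/Qwen1.5-0.5B-dpo-mix-7k with transformers.
# Assumes transformers >= 4.39.2 (the version listed above) and ~1.2 GB free memory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "burtenshaw/Qwen1.5-0.5B-dpo-mix-7k"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the Torch Data Type listed above
)

# The model is tagged as conversational, so prompting through the chat template
# is the natural route; the question below is a placeholder.
messages = [{"role": "user", "content": "Summarize what DPO fine-tuning does."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Since the context length and model max length are both 32768, prompts up to 32K tokens fit without touching the generation config.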

Which open-source LLMs or SLMs are you looking for? 35,008 models are listed in total.

Original data from Hugging Face, OpenCompass, and various public Git repositories.
Release v2024040901