Qwen 2.5 7B DPO Split1 16bit Chunk12 by Thamed-Chowdhury


16bit · Autotrain compatible · Base model (finetune): unsloth/Qwen2.5-7B-Instruct · Conversational · DPO · EN · Endpoints compatible · Instruct · Quantized · Qwen2 · Region: US · Safetensors · Sharded · TensorFlow · TRL · Unsloth
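The DPO, TRL, and Unsloth tags suggest this checkpoint was produced by direct preference optimization of unsloth/Qwen2.5-7B-Instruct. The listing does not document the actual training recipe, so the sketch below is hypothetical: the preference-data file, output directory, and hyperparameters are placeholders, and the exact DPOTrainer signature varies across TRL releases.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "unsloth/Qwen2.5-7B-Instruct"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Hypothetical preference dataset with "prompt", "chosen", and "rejected" columns.
train_dataset = load_dataset(
    "json", data_files="preferences_split1_chunk12.jsonl", split="train"
)

args = DPOConfig(
    output_dir="qwen-2.5-7B-DPO-split1-16bit-chunk12",
    beta=0.1,                        # placeholder strength of the preference penalty
    per_device_train_batch_size=2,   # placeholder batch size
    num_train_epochs=1,
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,      # "tokenizer=" in older TRL releases
)
trainer.train()
trainer.save_model(args.output_dir)
```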

Qwen 2.5 7B DPO Split1 16bit Chunk12 Benchmarks

Scores shown as nn.n% indicate how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Qwen 2.5 7B DPO Split1 16bit Chunk12 (Thamed-Chowdhury/qwen-2.5-7B-DPO-split1-16bit-chunk12)

Qwen 2.5 7B DPO Split1 16bit Chunk12 Parameters and Internals

LLM Name: Qwen 2.5 7B DPO Split1 16bit Chunk12
Repository 🤗: https://huggingface.co/Thamed-Chowdhury/qwen-2.5-7B-DPO-split1-16bit-chunk12
Base Model(s): unsloth/Qwen2.5-7B-Instruct
Model Size: 7B
Required VRAM: 15.2 GB
Updated: 2025-02-22
Maintainer: Thamed-Chowdhury
Model Type: qwen2
Instruction-Based: Yes
Model Files: 4.9 GB (1 of 4), 4.9 GB (2 of 4), 4.3 GB (3 of 4), 1.1 GB (4 of 4)
Supported Languages: en
Quantization Type: 16bit
Model Architecture: Qwen2ForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.44.2
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|PAD_TOKEN|>
Vocabulary Size: 152064
Torch Data Type: bfloat16
Errors: replace
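Given the repository ID, Qwen2ForCausalLM architecture, bfloat16 weights, and 32768-token context listed above, a minimal loading sketch with Hugging Face Transformers might look like the following (the prompt is a placeholder; the full-precision sharded weights need roughly 15.2 GB of VRAM):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Thamed-Chowdhury/qwen-2.5-7B-DPO-split1-16bit-chunk12"

# Load the sharded bfloat16 safetensors weights (~15.2 GB of VRAM).
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# The model is instruction-tuned, so format the prompt with the chat template.
messages = [{"role": "user", "content": "Summarize DPO in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```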

Quantized Models of the Qwen 2.5 7B DPO Split1 16bit Chunk12

Model | Likes | Downloads | VRAM
...5 7B DPO Split1 16bit Chunk34 | 0 | 7 | 15 GB

Best Alternatives to Qwen 2.5 7B DPO Split1 16bit Chunk12

Best Alternatives | Context / RAM | Downloads | Likes
Qwen2.5 7B Instruct 1M 4bit | 986K / 4.3 GB | 730 | 6
...B Instruct 1M Unsloth Bnb 4bit | 986K / 7.5 GB | 354 | 1
...5 7B Instruct Unsloth Bnb 4bit | 32K / 7.2 GB | 3422 | 11
Mini QwQ | 32K / 15.2 GB | 27 | 1
AetherSett | 32K / 15.2 GB | 26 | 1
Qwen2.5 7B Instruct Bnb 4bit | 32K / 5.5 GB | 674795 | 11
SphinX | 32K / 15.2 GB | 32 | 1
ReasonTest | 32K / 15.2 GB | 11 | 0
Kyro N1 7B | 32K / 15.2 GB | 187 | 5
UIGEN 7B 16bit | 32K / 15.2 GB | 79 | 5
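Several of the alternatives above are bitsandbytes 4-bit quantizations that fit in roughly 4-7 GB instead of the 15.2 GB this checkpoint needs in bfloat16. A rough sketch of loading this model (or one of the listed bnb-4bit checkpoints) in 4-bit, assuming a CUDA GPU with bitsandbytes and accelerate installed:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

repo_id = "Thamed-Chowdhury/qwen-2.5-7B-DPO-split1-16bit-chunk12"

# Quantize the bfloat16 weights to NF4 on the fly to cut VRAM use sharply.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```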

Rank the Qwen 2.5 7B DPO Split1 16bit Chunk12 Capabilities

🆘 Have you tried this model? Rate its performance. This feedback helps the ML community identify the most suitable models for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227