Replete Qwen 2.5 3B CoT RP by bunnycore


Tags: Merged Model | Arxiv:2306.01708 | Arxiv:2311.03099 | Autotrain compatible | Base model: bunnycore/qwen-2.5-... | Conversational | Endpoints compatible | LoRA | Qwen2 | Region: us | Safetensors | Sharded | Tensorflow

Replete Qwen 2.5 3B CoT RP Benchmarks

Benchmark scores (%) indicate how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Replete Qwen 2.5 3B CoT RP (bunnycore/Replete-Qwen-2.5-3b-CoT-RP)

Replete Qwen 2.5 3B CoT RP Parameters and Internals

Additional Notes
This model is a merge of pre-trained language models created with mergekit.
Training Details
Methodology: merged using the DARE (arXiv:2311.03099) and TIES (arXiv:2306.01708) methods, as sketched below.
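Below is a minimal sketch of how such a DARE-TIES merge could be expressed for mergekit and run from Python. The density and weight values, the choice of base model, and the file paths are illustrative assumptions, not the author's actual recipe; only the three model names come from this card.

```python
# Sketch: drive a mergekit DARE-TIES merge from Python.
# Assumptions: density/weight values, base-model choice, and paths are
# illustrative; only the model names come from the model card.
import subprocess
import textwrap

config = textwrap.dedent("""\
    base_model: bunnycore/Qwen-2.5-3b-RP
    merge_method: dare_ties        # DARE pruning of deltas + TIES sign election
    models:
      - model: Replete-AI/Replete-LLM-V2.5-Qwen-3b
        parameters:
          density: 0.5             # fraction of delta weights kept (assumed)
          weight: 0.5              # contribution to the merge (assumed)
      # mergekit's "model+lora" syntax applies a LoRA adapter before merging
      - model: bunnycore/Qwen-2.5-3b-RP+bunnycore/Qwen-2.5-3b-rp-mix-lora
        parameters:
          density: 0.5
          weight: 0.5
    dtype: float16
""")

with open("merge-config.yml", "w", encoding="utf-8") as f:
    f.write(config)

# mergekit-yaml <config> <output-dir> is mergekit's standard entry point
subprocess.run(["mergekit-yaml", "merge-config.yml", "./merged-model"], check=True)
```

For context: DARE randomly drops a fraction of each fine-tune's weight deltas and rescales the survivors, while TIES resolves sign conflicts among the remaining deltas before they are added back onto the base model.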
LLM Name: Replete Qwen 2.5 3B CoT RP
Repository: https://huggingface.co/bunnycore/Replete-Qwen-2.5-3b-CoT-RP
Base Model(s): bunnycore/Qwen-2.5-3b-RP, Replete-AI/Replete-LLM-V2.5-Qwen-3b, bunnycore/Qwen-2.5-3b-rp-mix-lora
Merged Model: Yes
Model Size: 3B
Required VRAM: 6.8 GB
Updated: 2025-03-11
Maintainer: bunnycore
Model Type: qwen2
Model Files: 2 safetensors shards (5.0 GB + 1.8 GB = 6.8 GB total)
Model Architecture: Qwen2ForCausalLM
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.44.1
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 151936
LoRA Model: Yes
Torch Data Type: float16
Errors: replace (tokenizer decoding error handling)
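
Given the settings above, loading the model with Hugging Face transformers might look like the following minimal sketch; the prompt and the sampling parameters are assumptions, not recommendations from the model card.

```python
# Sketch: load and query the merged model with Hugging Face transformers,
# using the card's reported settings (float16 weights, Qwen2 tokenizer,
# ~6.8 GB VRAM). Prompt and sampling parameters are assumed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bunnycore/Replete-Qwen-2.5-3b-CoT-RP"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # Qwen2Tokenizer
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the card's Torch Data Type
    device_map="auto",          # needs roughly 6.8 GB of VRAM per the card
)

messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```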

Best Alternatives to Replete Qwen 2.5 3B CoT RP

Best Alternatives | Context / RAM | Downloads | Likes
...Sft Stage1.2 Ss1 Expert How To | 128K / 15.1 GB | 254 | 0
...b Sft Stage1.2 Ss1 Expert News | 128K / 15.1 GB | 224 | 0
...t Stage1.2 Ss1 Expert Software | 128K / 15.1 GB | 212 | 0
...2 Ss1 Expert Fictional Lyrical | 128K / 15.1 GB | 60 | 0
...b Sft Stage1.2 Ss1 Expert Math | 128K / 15.1 GB | 34 | 0
... Formattedtext Math Wiki Merge | 128K / 15.1 GB | 34 | 0
...ware Howto Formattedtext Merge | 128K / 15.1 GB | 35 | 0
...e1.2 Ss1 Expert Formatted Text | 128K / 15.1 GB | 33 | 0
...toredteam Helpful 0.25 Helpful | 128K / 15.1 GB | 91 | 0
...tage1.1 Ss1 With Math.no Issue | 128K / 15.1 GB | 21 | 0
Note: a green score (e.g. "73.2") means that the model is better than bunnycore/Replete-Qwen-2.5-3b-CoT-RP.


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227