WomboCombo R1 Coder 14B Preview by RDson


Tags: Autotrain compatible · Codegen · Conversational · Endpoints compatible · GGUF · Merge · Mergekit · Q8 · Quantized · Qwen2 · Region: US · Safetensors · Sharded · Tensorflow
Base model tags (merge): Qwen/Qwen2.5-Coder-14B · Qwen/Qwen2.5-Coder-14B-Instruct · deepseek-ai/DeepSeek-R1-Distill-Qwen-14B · arcee-ai/SuperNova-Medius

WomboCombo R1 Coder 14B Preview Benchmarks

Scores are shown as nn.n% and indicate how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
WomboCombo R1 Coder 14B Preview (RDson/WomboCombo-R1-Coder-14B-Preview)

WomboCombo R1 Coder 14B Preview Parameters and Internals

LLM Name: WomboCombo R1 Coder 14B Preview
Repository: 🤗 https://huggingface.co/RDson/WomboCombo-R1-Coder-14B-Preview
Base Model(s): Qwen/Qwen2.5-Coder-14B · Qwen/Qwen2.5-Coder-14B-Instruct · deepseek-ai/DeepSeek-R1-Distill-Qwen-14B · arcee-ai/SuperNova-Medius
Model Size: 14B
Required VRAM: 29.7 GB
Updated: 2025-03-21
Maintainer: RDson
Model Type: qwen2
Model Files: 15.7 GB, plus safetensors shards: 4.9 GB (1-of-6), 5.0 GB (2-of-6), 5.0 GB (3-of-6), 5.0 GB (4-of-6), 5.0 GB (5-of-6), 4.8 GB (6-of-6)
GGUF Quantization: Yes
Quantization Type: q8 | gguf
Generates Code: Yes
Model Architecture: Qwen2ForCausalLM
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.48.1
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 152064
Torch Data Type: bfloat16
Errors: replace
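
Given the Qwen2ForCausalLM architecture, bfloat16 weights, and 32768-token context listed above, the sharded safetensors checkpoint should load with a recent transformers release (the card lists 4.48.1). A minimal loading sketch, assuming a GPU setup with roughly 30 GB of memory free for the bfloat16 weights; the prompt is illustrative:

```python
# Minimal sketch: load the safetensors variant with transformers (>= 4.48 assumed).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RDson/WomboCombo-R1-Coder-14B-Preview"

tokenizer = AutoTokenizer.from_pretrained(repo_id)  # Qwen2Tokenizer per the card
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the card's Torch Data Type
    device_map="auto",           # spread the ~29.7 GB of weights across available devices
)

# Qwen2-style chat formatting; the user message is illustrative.
messages = [{"role": "user", "content": "Write a Python function that reverses a linked list."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

For the q8 GGUF variant, llama-cpp-python is one option. A sketch under the assumption that the repo's GGUF file is named as below (check the repo for the actual filename):

```python
# Sketch: run the GGUF q8 quantization via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="WomboCombo-R1-Coder-14B-Preview-Q8_0.gguf",  # hypothetical filename
    n_ctx=32768,      # the card's full context length; lower this to save memory
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Implement binary search in Python."}],
    max_tokens=512,
)
print(result["choices"][0]["message"]["content"])
```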

Best Alternatives to WomboCombo R1 Coder 14B Preview

Best Alternatives | Context / RAM | Downloads | Likes
CogitoZ14 | 32K / 29.7 GB | 25 | 1
...wen2.5 Coder 14B Instruct 4bit | 32K / 8.3 GB | 26506 | 14
UIGEN T1.1 Qwen 14B | 32K / 29.7 GB | 1346 | 32
....5 Coder 14B Instruct Bnb 4bit | 32K / 9.9 GB | 9737 | 3
Qwen2.5 Coder 14B Bnb 4bit | 32K / 9.9 GB | 1800 | 4
UIGEN T1.2 Tailwind 14B | 32K / 29.7 GB | 46 | 1
Korean QWEN Coder2.5 14B | 32K / 29.7 GB | 60 | 2
ZYH LLM Qwen2.5 14B V4 | 986K / 29.7 GB | 805 | 4
Qwen2.5 14B YOYO V4 | 986K / 29.7 GB | 399 | 4
ZYH LLM Qwen2.5 14B V2 | 986K / 29.7 GB | 106 | 1
Note: a green score (e.g. "73.2") means that model performs better than RDson/WomboCombo-R1-Coder-14B-Preview.
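
Download and like counts drift over time; current figures for any of these repos can be pulled directly from the Hugging Face Hub. A small sketch using the huggingface_hub client (the repo id comes from the card above):

```python
# Fetch live download/like counts for a model repo from the Hugging Face Hub.
from huggingface_hub import HfApi

api = HfApi()
info = api.model_info("RDson/WomboCombo-R1-Coder-14B-Preview")
print(f"downloads={info.downloads}, likes={info.likes}")  # values at query time
```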

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227