Openhands Lm 32B V0.1 by all-hands


Tags: Arxiv:2412.21139 · Agent · Base model:finetune:qwen/qwen2... · Base model:qwen/qwen2.5-coder-... · Codegen · Coding · Conversational · Dataset:swe-gym/swe-gym · En · Instruct · Qwen2 · Region:us · Safetensors · Sharded · Tensorflow

Openhands Lm 32B V0.1 Benchmarks

Scores (nn.n%) show how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Openhands Lm 32B V0.1 (all-hands/openhands-lm-32b-v0.1)

Openhands Lm 32B V0.1 Parameters and Internals

LLM Name: Openhands Lm 32B V0.1
Repository: https://huggingface.co/all-hands/openhands-lm-32b-v0.1
Base Model(s): Qwen/Qwen2.5-Coder-32B-Instruct
Model Size: 32b
Required VRAM: 65.8 GB
Updated: 2025-04-07
Maintainer: all-hands
Model Type: qwen2
Instruction-Based: Yes
Model Files: 14 sharded safetensors files; shards 1-of-14 through 13-of-14 at 4.9 GB each, shard 14-of-14 at 2.1 GB
Supported Languages: en
Generates Code: Yes
Model Architecture: Qwen2ForCausalLM
License: mit
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.49.0
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 152064
Torch Data Type: bfloat16
Errors: replace
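The Required VRAM figure above is consistent with the sharded file list. A quick sanity-check sketch (shard sizes taken from the file list above, rounded to 0.1 GB as reported):

```python
# Sum the 14 safetensors shards listed above: thirteen at 4.9 GB plus one
# at 2.1 GB should match the Required VRAM figure of 65.8 GB.
shard_sizes_gb = [4.9] * 13 + [2.1]  # shards 1-of-14 .. 14-of-14

total_gb = round(sum(shard_sizes_gb), 1)
print(total_gb)  # 65.8

# Cross-check against the parameter count: ~32B parameters stored in
# bfloat16 (2 bytes each) is roughly 64 GB of weights, in line with the
# shard total (the remainder is embeddings/metadata overhead).
approx_weight_gb = 32e9 * 2 / 1e9
print(approx_weight_gb)  # 64.0
```

This also explains why loading the full-precision checkpoint needs more than a single consumer GPU.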

Quantized Models of Openhands Lm 32B V0.1

Model                         Likes   Downloads   VRAM
Openhands Lm 32B V0.1 AWQ     3       358         19 GB
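The AWQ footprint is roughly what 4-bit weight quantization predicts. A back-of-the-envelope sketch (assuming ~32B parameters; the gap between the pure 4-bit figure and the listed 19 GB covers quantization scales, zero-points, and any layers kept at higher precision):

```python
# Illustrative memory estimates for a ~32B-parameter model.
params = 32e9

bf16_gb = params * 2 / 1e9    # bfloat16: 2 bytes per parameter
awq4_gb = params * 0.5 / 1e9  # AWQ 4-bit: 0.5 bytes per parameter

print(bf16_gb)  # 64.0  -> close to the 65.8 GB full-precision figure
print(awq4_gb)  # 16.0  -> the listed 19 GB adds quantization overhead
```

In other words, AWQ cuts weight memory by roughly 4x, which is what brings the model within reach of a single 24 GB GPU.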

Best Alternatives to Openhands Lm 32B V0.1

Best Alternatives                     Context / RAM      Downloads   Likes
...y Qwen2.5coder 32B V24.1q 200K     195K / 65.8 GB     16          2
Qwen2.5 32B YOYO MIX                  128K / 65.7 GB     21          2
QwQ Qwen2.5 Coder Instruct 32B        128K / 65.8 GB     43          0
QwQenSeek Coder                       128K / 65.7 GB     78          5
Rombos Coder V2.5 Qwen 32B            128K / 65.8 GB     39          18
Tessa T1 32B                          117K / 65.8 GB     77          16
UIGEN T1.5 32B                        117K / 65.8 GB     44          4
Qwen2.5 Coder 32B Instruct            32K / 65.8 GB      488888      1767
OlympicCoder 32B                      32K / 65.8 GB      4314        147
Qwen2.5 Test 32B It                   32K / 65.8 GB      83          10



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227