Openbuddy Qwen1.5 32B V21.2 32K by OpenBuddy

Tags: Autotrain compatible, Conversational, De, En, Fi, Fr, It, Ja, Ko, Qwen2, Region:us, Ru, Safetensors, Sharded, Tensorflow, Zh

Openbuddy Qwen1.5 32B V21.2 32K Benchmarks

Openbuddy Qwen1.5 32B V21.2 32K (OpenBuddy/openbuddy-qwen1.5-32b-v21.2-32k)

Openbuddy Qwen1.5 32B V21.2 32K Parameters and Internals

Model Type: text generation
Use Cases:
Areas: research, commercial applications
Limitations (out-of-scope uses): the medical field, controlling software or hardware systems that may cause harm, and important financial or legal decisions
Additional Notes: Not related to GPT or OpenAI. Limitations and risks are disclosed in the disclaimer.
Supported Languages: zh, en, fr, de, ja, ko, it, ru, fi (proficiency levels not stated)
Input / Output:
Input Format: Uses special tokens such as <|role|>, <|says|>, and <|end|>; these are enabled by default in the transformers and vLLM libraries (see the usage sketch below).
Accepted Modalities: text
Output Format: Textual responses in line with input prompts.
Performance Tips: Ensure special-token support is enabled for proper functionality.
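
As a rough illustration of the input format, the sketch below loads the model with transformers and lets the chat template shipped in the repository insert the special tokens. The example prompt, device_map, and generation settings are illustrative assumptions, not values taken from the model card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OpenBuddy/openbuddy-qwen1.5-32b-v21.2-32k"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# The chat template shipped with the repository is expected to insert the
# <|role|>, <|says|>, and <|end|> tokens; printing the prompt verifies that.
messages = [{"role": "user", "content": "Introduce yourself in two sentences."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the listed Torch Data Type
    device_map="auto",           # illustrative; the full bf16 weights need roughly 64.6 GB of VRAM
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```
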
LLM Name: Openbuddy Qwen1.5 32B V21.2 32K
Repository: 🤗 https://huggingface.co/OpenBuddy/openbuddy-qwen1.5-32b-v21.2-32k
Model Size: 32b
Required VRAM: 64.6 GB
Updated: 2025-01-15
Maintainer: OpenBuddy
Model Type: qwen2
Model Files: 4.9 GB (1-of-14), 4.8 GB (2-of-14), 4.8 GB (3-of-14), 4.8 GB (4-of-14), 4.8 GB (5-of-14), 4.8 GB (6-of-14), 4.8 GB (7-of-14), 4.8 GB (8-of-14), 4.8 GB (9-of-14), 4.8 GB (10-of-14), 4.8 GB (11-of-14), 4.8 GB (12-of-14), 4.8 GB (13-of-14), 2.1 GB (14-of-14)
Supported Languages: zh en fr de ja ko it ru fi
Model Architecture: Qwen2ForCausalLM
License: other
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.40.0.dev0
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|pad|>
Vocabulary Size: 152064
Torch Data Type: bfloat16
Errors: replace
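
For serving, the settings listed above (bfloat16 weights, 32768-token context) map directly onto vLLM arguments, and the card notes that the special tokens are supported there by default. This is a minimal sketch under assumed hardware: tensor_parallel_size=2 and the example prompt are illustrative, not from the model card.

```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

model_id = "OpenBuddy/openbuddy-qwen1.5-32b-v21.2-32k"
tokenizer = AutoTokenizer.from_pretrained(model_id)

llm = LLM(
    model=model_id,
    dtype="bfloat16",         # Torch Data Type: bfloat16
    max_model_len=32768,      # Context Length / Model Max Length: 32768
    tensor_parallel_size=2,   # assumption: shard the ~64.6 GB of weights across two GPUs
)

# Let the repository's chat template build the <|role|>/<|says|>/<|end|> formatted prompt.
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Translate 'good morning' into Japanese."}],
    tokenize=False,
    add_generation_prompt=True,
)
outputs = llm.generate([prompt], SamplingParams(temperature=0.7, max_tokens=128))
print(outputs[0].outputs[0].text)
```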

Best Alternatives to Openbuddy Qwen1.5 32B V21.2 32K

Best Alternatives                        Context / RAM     Downloads / Likes
Openbuddy Qwq 32B V24.2 200K             195K / 65.8 GB    733
Openbuddy Qwq 32B V24.1 200K             195K / 65.8 GB    581
...y Qwen2.5coder 32B V24.1q 200K        195K / 65.8 GB    242
Qwen2.5 32B                              128K / 65.5 GB    2362760
Ultiima 32B                              128K / 65.8 GB    1323
...wen2.5 32B Inst BaseMerge TIES        128K / 65.8 GB    1083
...wen2.5 32B Inst BaseMerge TIES        128K / 65.8 GB    381
Franqwenstein 35B                        128K / 69.8 GB    2687
EVA Qwen2.5 32B V0.2                     128K / 65.8 GB    219947
Qwenstein2.5 32B Instruct                128K / 65.5 GB    422

Original data from Hugging Face, OpenCompass, and various public git repos.
Release v20241227