CodeQwen1.5 7B by Qwen


Tags: autotrain-compatible, en, endpoints-compatible, license:other, pretrained, qwen2, region:us, safetensors, sharded, tensorflow

CodeQwen1.5 7B Benchmarks

Rank the CodeQwen1.5 7B Capabilities

🆘 Have you tried this model? Rate its performance. This feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
CodeQwen1.5 7B (Qwen/CodeQwen1.5-7B)

Quantized Models of the CodeQwen1.5 7B

CodeQwen1.5 7B EXL2 8.0bpw | 7 GB
CodeQwen1.5 7B EXL2 6.0bpw | 5 GB
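The on-disk size of an EXL2 quant scales roughly linearly with its bits-per-weight (bpw): about params × bpw / 8 bytes, plus a small overhead for the quantization metadata. A minimal sketch, assuming ~7.25B parameters for CodeQwen1.5 7B (an assumed count; this page only lists the size as "7b"):

```python
def exl2_size_gb(n_params: float, bpw: float) -> float:
    """Rough on-disk size of an EXL2 quant: n_params weights at bpw bits each."""
    return n_params * bpw / 8 / 1e9

# Assumed parameter count for a "7B" Qwen2 model (not stated on this page).
N_PARAMS = 7.25e9

print(f"{exl2_size_gb(N_PARAMS, 8.0):.1f} GB")  # ~7 GB for the 8.0bpw quant
print(f"{exl2_size_gb(N_PARAMS, 6.0):.1f} GB")  # ~5 GB for the 6.0bpw quant
```

Actual repositories run slightly larger than this estimate because EXL2 stores calibration metadata and keeps some layers at higher precision.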

Best Alternatives to CodeQwen1.5 7B

Best Alternatives              | Context / Size | HF Rank
SuperCode                      | 64K / 14.4 GB  | 80
Nxcode CQ 7B Orpo              | 64K / 14.5 GB  | 88430
Qwen Theia Workshop            | 64K / 14.5 GB  | 3550
Svelte                         | 64K / 14.5 GB  | 470
CodeQwen Text To Rule3 Merged  | 64K / 14.5 GB  | 120
CodeQwen1.5 7B Chat            | 64K / 14.6 GB  | 20263178
Biomistral 7B Instruct TIES    | 32K / 1.2 GB   | 230
Qwen 1.5 7B Layer Mix Bpw 2.2  | 32K / 4.7 GB   | 100
Qwen 1.5 7B Layer Mix Bpw 2.5  | 32K / 4.8 GB   | 80
Sailor 7B                      | 32K / 15.4 GB  | 307027

CodeQwen1.5 7B Parameters and Internals

LLM Name: CodeQwen1.5 7B
Repository: Qwen/CodeQwen1.5-7B (open on 🤗 Hugging Face)
Model Size: 7b
Required VRAM: 14.6 GB
Model Type: qwen2
Model Files: 3.9 GB (1-of-4), 4.0 GB (2-of-4), 4.0 GB (3-of-4), 2.7 GB (4-of-4)
Supported Languages: en
Model Architecture: Qwen2ForCausalLM
Context Length: 65536
Model Max Length: 65536
Transformers Version: 4.39.3
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <fim_pad>
Vocabulary Size: 92416
Initializer Range: 0.02
Torch Data Type: bfloat16
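The Required VRAM figure is consistent with the listed data type: bfloat16 stores 2 bytes per parameter, and the four safetensors shards listed under Model Files sum to the same total. A quick sanity check (the ~7.3B parameter estimate is inferred from the 14.6 GB figure, not stated on this page):

```python
# Shard sizes as listed under "Model Files" (GB).
shards = [3.9, 4.0, 4.0, 2.7]
total_gb = sum(shards)
print(round(total_gb, 1))  # 14.6, matching the "Required VRAM" row

# bfloat16 = 2 bytes per parameter, so weight memory ≈ 2 bytes * n_params.
BYTES_PER_PARAM = 2  # bfloat16
n_params_est = total_gb * 1e9 / BYTES_PER_PARAM
print(f"{n_params_est / 1e9:.2f}B parameters")  # ≈ 7.30B, consistent with a "7b" model
```

Note this covers weights only; serving at the full 65536-token context adds KV-cache memory on top of the 14.6 GB.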

Looking for other open-source LLMs or SLMs? 35549 models are indexed in total.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v2024042801