CodeQwen1.5 7B AWQ by TechxGenus


Tags: 4-bit · Autotrain compatible · AWQ · Conversational · English · Endpoints compatible · License: other · Pretrained · Quantized · Qwen2 · Region: US · Safetensors

CodeQwen1.5 7B AWQ Benchmarks

Rank the CodeQwen1.5 7B AWQ Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
CodeQwen1.5 7B AWQ (TechxGenus/CodeQwen1.5-7B-AWQ)

Best Alternatives to CodeQwen1.5 7B AWQ

Best Alternatives                    Context / Model Size    HF Rank
CodeQwen1.5 7B Chat AWQ              64K / 5.3 GB            14756
CodeQwen1.5 7B AWQ                   64K / 5.3 GB            2982
Qwen1.5 7B AWQ 4bit                  32K / 5.8 GB            100
Qwen1.5 7B Chat AWQ                  32K / 5.9 GB            209111
Qwen1.5 7B Chat AWQ                  32K / 5.9 GB            70
CodeQwen1.5 7B EXL2 6.0bpw           64K / 5.9 GB            60
CodeQwen1.5 7B Chat EXL2 8.0bpw      64K / 7.5 GB            124
CodeQwen1.5 7B EXL2 8.0bpw           64K / 7.5 GB            51
Qwen1.5 7B Chat 4bit                 32K / 5.2 GB            361
Qwen1.5 7B Bnb 4bit                  32K / 5.8 GB            5550

CodeQwen1.5 7B AWQ Parameters and Internals

LLM Name: CodeQwen1.5 7B AWQ
Repository: Hugging Face (TechxGenus/CodeQwen1.5-7B-AWQ)
Base Model(s): CodeQwen1.5 7B Chat (Qwen/CodeQwen1.5-7B-Chat)
Model Size: 7B
Required VRAM: 4.9 GB
Model Type: qwen2
Model Files: 4.9 GB
Supported Languages: en
AWQ Quantization: Yes
Quantization Type: awq
Model Architecture: Qwen2ForCausalLM
Context Length: 65536
Model Max Length: 65536
Transformers Version: 4.39.3
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <fim_pad>
Vocabulary Size: 92416
Initializer Range: 0.02
Torch Data Type: float16
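Given the fields above (model type qwen2, architecture Qwen2ForCausalLM, Transformers 4.39.3, float16 weights, ~4.9 GB of model files), a minimal loading sketch with Hugging Face transformers might look like the following. This is an illustration, not the author's documented usage: it assumes the `autoawq` package is installed and a CUDA GPU with roughly 5 GB of free VRAM is available, and the Fibonacci prompt is a hypothetical example.

```python
MODEL_ID = "TechxGenus/CodeQwen1.5-7B-AWQ"


def load_model(model_id: str = MODEL_ID):
    """Load the AWQ-quantized checkpoint.

    Assumes transformers >= 4.39.3 plus autoawq, and a CUDA GPU with
    roughly 5 GB of free VRAM (per the model card above).
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer  # deferred import

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype="auto",   # checkpoint stores float16 weights
        device_map="auto",    # place layers on the available GPU
    )
    return tokenizer, model


if __name__ == "__main__":
    tokenizer, model = load_model()
    # This is the pretrained (non-chat) code model, so use plain-text
    # completion rather than a chat template.
    prompt = "# Python function that returns the n-th Fibonacci number\n"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Because this is the base checkpoint (context length 65536, padding token `<fim_pad>`), prompts should be raw code or comments; the chat-tuned variant listed in the alternatives table is the better fit for conversational use.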


Original data from HuggingFace, OpenCompass and various public git repos.
Release v2024042801