Qwen1.5 4B Chat GPTQ Int4 by Qwen


Tags: 4-bit · Autotrain compatible · Chat · Conversational · En · Endpoints compatible · GPTQ · License: other · Quantized · Qwen2 · Region: US · Safetensors · Sharded · Tensorflow

Rank the Qwen1.5 4B Chat GPTQ Int4 Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
Qwen1.5 4B Chat GPTQ Int4 (Qwen/Qwen1.5-4B-Chat-GPTQ-Int4)

Best Alternatives to Qwen1.5 4B Chat GPTQ Int4

Best Alternatives                    Context / Size    HF Rank
Qwen1.5 4B Chat GPTQ                 32K / 3.4 GB      90
Qwen1.5 4B Chat GPTQ Int8            32K / 4.8 GB      1393
Qwen1.5 4B Chat 3.0bpw H6 EXL2       32K / 2.3 GB      70
Qwen1.5 4B Chat 4.0bpw H6 EXL2       32K / 2.7 GB      50
Qwen1.5 4B Chat 4bit                 32K / 2.8 GB      180
Qwen1.5 4B Chat 5.0bpw H6 EXL2       32K / 3.1 GB      50
Qwen1.5 4B Chat GGUF                 32K / 1.6 GB      18660
Qwen 1.5 4B Layer Mix Bpw 2.5        32K / 2.7 GB      100
Qwen1.5 4B Chat AWQ                  32K / 3.2 GB      15022
Sailor 4B AWQ                        32K / 3.2 GB      80
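The sizes above track bits per weight fairly closely. As a rough rule of thumb (my own sketch, not something stated on this page), a quantized checkpoint weighs in at about parameters × bits ÷ 8, plus some overhead for embeddings and non-quantized layers; the overhead constant below is an assumption for illustration.

```python
def approx_size_gb(n_params: float, bits_per_weight: float,
                   overhead_gb: float = 0.4) -> float:
    """Rough checkpoint/VRAM footprint: params * bits / 8, plus a fixed
    overhead (assumed here) for embeddings and unquantized tensors."""
    return n_params * bits_per_weight / 8 / 1e9 + overhead_gb

# Qwen1.5-4B has roughly 4e9 parameters.
print(round(approx_size_gb(4e9, 4), 1))   # 4-bit quant: ~2.4 GB
print(round(approx_size_gb(4e9, 16), 1))  # float16:     ~8.4 GB
```

This is only a ballpark: the 2.7–3.4 GB range of the 4-bit variants above reflects differing group sizes, calibration choices, and which layers each format leaves unquantized.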

Qwen1.5 4B Chat GPTQ Int4 Parameters and Internals

LLM Name: Qwen1.5 4B Chat GPTQ Int4
Repository: Qwen/Qwen1.5-4B-Chat-GPTQ-Int4 (open on 🤗 Hugging Face)
Model Size: 4B
Required VRAM: 3.2 GB
Model Type: qwen2
Model Files: 2.0 GB (1 of 2), 1.2 GB (2 of 2)
Supported Languages: en
GPTQ Quantization: Yes
Quantization Type: gptq
Model Architecture: Qwen2ForCausalLM
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.37.0
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 151936
Initializer Range: 0.02
Torch Data Type: float16
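Since this is a chat fine-tune with a Qwen2Tokenizer and `<|endoftext|>` padding, prompts follow the ChatML conversation format used by the Qwen1.5 chat models. In practice you would let `tokenizer.apply_chat_template()` produce this string; the sketch below just makes the wire format visible, with a hypothetical helper name.

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a Qwen1.5-style ChatML prompt by hand (illustrative only;
    tokenizer.apply_chat_template() does this for you in Transformers)."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt("You are a helpful assistant.", "Hello!")
print(prompt)
```

When loading the checkpoint itself with Transformers, note that a GPTQ backend (e.g. the AutoGPTQ integration) must be installed for the Int4 weights to be dequantized at inference time.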


Original data from HuggingFace, OpenCompass and various public git repos.
Release v2024040901