| Attribute | Value |
|---|---|
| LLM Name | Qwen1.5 7B Chat GPTQ Int8 |
| Repository | Open on 🤗 Hugging Face |
| Model Size | 7B |
| Required VRAM | 9.1 GB |
| Updated | 2024-07-27 |
| Maintainer | Qwen |
| Model Type | qwen2 |
| Supported Languages | en |
| GPTQ Quantization | Yes |
| Quantization Type | gptq |
| Model Architecture | Qwen2ForCausalLM |
| License | other |
| Context Length | 32768 |
| Model Max Length | 32768 |
| Transformers Version | 4.37.0 |
| Tokenizer Class | Qwen2Tokenizer |
| Padding Token | <\|endoftext\|> |
| Vocabulary Size | 151936 |
| Torch Data Type | float16 |
| Errors | replace |
| Best Alternatives | HF Rank | Context/RAM | Downloads | Likes |
|---|---|---|---|---|
| Qwen2 7B Int4 GPTQ Wikitext2 | 0.2 | 128K / 5.6 GB | 21 | 0 |
| CodeQwen1.5 7B Chat GPTQ Int4 | 0.2 | 64K / 4.9 GB | 173 | 0 |
| Qwen2 7B Instruct GPTQ Int4 | 0.3 | 32K / 5.6 GB | 6880 | 14 |
| Qwen2 7B Instruct GPTQ Int8 | 0.3 | 32K / 8.9 GB | 4224 | 13 |
| Qwen1.5 7B Int3 GPTQ Wikitext2 | 0.3 | 32K / 5 GB | 22 | 0 |
| Qwen1.5 7B Int4 GPTQ Wikitext2 | 0.2 | 32K / 5.8 GB | 20 | 0 |
| AVA Qwen1.5 7B Chat Gptq 4bit | 0.2 | 32K / 5.8 GB | 20 | 0 |
| Qwen1.5 7B Chat GPTQ Int4 | 0.2 | 32K / 5.9 GB | 371 | 18 |
| Qwen2 7B Bnb 4bit | 0.3 | 128K / 5.5 GB | 5822 | 2 |
| Tantrum 16bit | 0.3 | 128K / 15.2 GB | 80 | 0 |