| LLM Name | Qwen1.5 32B Chat |
|---|---|
| Repository 🤗 | https://huggingface.co/Qwen/Qwen1.5-32B-Chat |
| Model Size | 32b |
| Required VRAM | 65.5 GB |
| Updated | 2024-12-30 |
| Maintainer | Qwen |
| Model Type | qwen2 |
| Model Files |  |
| Supported Languages | en |
| Model Architecture | Qwen2ForCausalLM |
| License | other |
| Context Length | 32768 |
| Model Max Length | 32768 |
| Transformers Version | 4.37.2 |
| Tokenizer Class | Qwen2Tokenizer |
| Padding Token | <\|endoftext\|> |
| Vocabulary Size | 152064 |
| Torch Data Type | bfloat16 |
| Errors | replace |
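Based on the configuration above (Qwen2ForCausalLM architecture, bfloat16 weights, 32768-token context), a minimal loading sketch with the 🤗 Transformers library might look like the following. The chat prompt is illustrative, and the settings assume enough GPU memory for roughly 65.5 GB of weights (e.g. sharding across multiple GPUs via `device_map="auto"`).

```python
# Minimal sketch: loading Qwen1.5-32B-Chat with Hugging Face Transformers.
# Assumes transformers >= 4.37.2 (per the table above) and sufficient GPU
# memory for ~65.5 GB of bfloat16 weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen1.5-32B-Chat"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # resolves to Qwen2Tokenizer
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the "Torch Data Type" field
    device_map="auto",           # shard across available GPUs
)

# Build a chat prompt with the model's chat template (example content).
messages = [
    {"role": "user", "content": "Give me a short introduction to large language models."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```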
| Quantized Model | Likes | Downloads | VRAM |
|---|---|---|---|
| Qwen1.5 32B Chat AWQ | 17 | 55 | 21 GB |
| Qwen1.5 32B Chat 4bit | 3 | 15 | 54 GB |
| Qwen1.5 32B Chat 8bit | 1 | 12 | 54 GB |
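The quantized builds above trade precision for memory; the AWQ variant reportedly fits in about 21 GB of VRAM. As a sketch, an AWQ checkpoint can usually be loaded through the same Transformers API once the `autoawq` package is installed. The repository id below is an assumption inferred from the row name; verify the exact id on the Hub before use.

```python
# Sketch: loading the AWQ-quantized variant (~21 GB VRAM per the table above).
# Assumes `pip install autoawq` and that the repo id matches the row name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

awq_model_id = "Qwen/Qwen1.5-32B-Chat-AWQ"  # assumed repo id, not confirmed here

tokenizer = AutoTokenizer.from_pretrained(awq_model_id)
model = AutoModelForCausalLM.from_pretrained(
    awq_model_id,
    torch_dtype=torch.float16,  # AWQ kernels run in fp16
    device_map="auto",
)
```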
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Openbuddy Qwq 32B V24.1 200K | 195K / 65.8 GB | 50 | 1 |
| ...y Qwen2.5coder 32B V24.1q 200K | 195K / 65.8 GB | 36 | 2 |
| Qwen2.5 32B | 128K / 65.5 GB | 23446 | 58 |
| Ultiima 32B | 128K / 65.8 GB | 26 | 3 |
| ...wen2.5 32B Inst BaseMerge TIES | 128K / 65.8 GB | 76 | 3 |
| ...wen2.5 32B Inst BaseMerge TIES | 128K / 65.8 GB | 53 | 1 |
| Franqwenstein 35B | 128K / 69.8 GB | 161 | 7 |
| EVA Qwen2.5 32B V0.2 | 128K / 65.8 GB | 2610 | 45 |
| EVA Qwen2.5 32B V0.0 | 128K / 65.8 GB | 1303 | 24 |
| EVA Qwen2.5 32B V0.1 | 128K / 65.8 GB | 1278 | 14 |