| Best Alternatives | HF Rank | Context/RAM | Downloads | Likes |
|---|---|---|---|---|
| Deita 32B | 72.16 | 32K / 64.6 GB | 1345 | 1 |
| ...penbuddy Qwen1.5 32B V21.1 32K | 70.75 | 32K / 64.6 GB | 2155 | 3 |
| Matter 0.2 32B | 68.72 | 32K / 64.6 GB | 1383 | 2 |
| Einstein V4 Qwen 1.5 32B | 68.54 | 32K / 64.6 GB | 514 | 2 |
| SauerkrautLM Qwen 32B | 67.39 | 32K / 64.6 GB | 2410 | 4 |
| Property | Value |
|---|---|
| LLM Name | Qwen1.5 32B |
| Repository | Open on Hugging Face |
| Merged Model | Yes |
| Model Size | 32.5B |
| Required VRAM | 65.5 GB |
| Updated | 2024-06-24 |
| Maintainer | Qwen |
| Model Type | qwen2 |
| Model Files | |
| Supported Languages | en |
| Model Architecture | Qwen2ForCausalLM |
| License | other |
| Context Length | 32768 |
| Model Max Length | 32768 |
| Transformers Version | 4.37.2 |
| Tokenizer Class | Qwen2Tokenizer |
| Padding Token | <\|endoftext\|> |
| Vocabulary Size | 152064 |
| Initializer Range | 0.02 |
| Torch Data Type | bfloat16 |
| Decode Errors | replace |
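
The Required VRAM figure follows directly from the parameter count and data type listed above: 32.5B parameters stored in bfloat16 take 2 bytes each, which is roughly 65 GB for the weights alone (activations and KV cache need extra headroom on top of that). A minimal sketch of the arithmetic, with the function name being our own illustration:

```python
def weight_memory_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Memory for model weights alone, in decimal GB.

    bytes_per_param defaults to 2, matching bfloat16 (the table's
    Torch Data Type); fp32 would be 4, int8 quantization 1.
    """
    return n_params * bytes_per_param / 1e9

# 32.5B parameters in bfloat16:
print(weight_memory_gb(32.5e9))  # 65.0 -- in line with the listed 65.5 GB
```

The small gap to the listed 65.5 GB is expected: the table value also counts embeddings/buffers and rounding in the shard sizes, so treat this formula as a lower bound when sizing hardware.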