| Best Alternatives | HF Rank | Context/RAM | Downloads | Likes |
|---|---|---|---|---|
| SauerkrautLM Qwen 32B | — | 32K / 64.6 GB | 2165 | 4 |
| ...penbuddy Qwen1.5 32B V21.2 32K | — | 32K / 64.6 GB | 2289 | 3 |
| ...penbuddy Qwen1.5 32B V21.1 32K | — | 32K / 64.6 GB | 1983 | 3 |
| Matter 0.2 32B | — | 32K / 64.6 GB | 767 | 2 |
| Deita 32B | — | 32K / 64.6 GB | 861 | 1 |
| Blossom V5 32B | — | 32K / 64.8 GB | 878 | 4 |
| Qwen1.5 32B Chat | — | 32K / 65.5 GB | 42897 | 102 |
| Qwen1.5 32B | — | 32K / 65.5 GB | 9091 | 75 |
| ...wen1.5 32B Chat 3.0bpw H6 EXL2 | — | 32K / 13.7 GB | 1 | 1 |
| Qwen1.5 32B Chat Quip 3bit | — | 32K / 14.8 GB | 2 | 1 |
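The Context/RAM column above tracks a simple rule of thumb: weight memory is roughly parameter count times bits per weight. A minimal sketch (the 32.5B parameter count is an assumption based on Qwen1.5-32B's published size; the estimate covers weights only, not KV cache or activations):

```python
def est_vram_gb(n_params_billion: float, bits_per_weight: float) -> float:
    """Rough VRAM estimate for model weights alone.

    Ignores KV cache, activations, and framework overhead, which is why
    real figures (e.g. the 13.7 GB EXL2 entry above) run a bit higher.
    """
    return n_params_billion * bits_per_weight / 8


print(est_vram_gb(32.5, 16))   # bf16: ~65 GB, near the ~64.6 GB listed
print(est_vram_gb(32.5, 3.0))  # 3.0 bpw quant: ~12.2 GB, near the 13.7 GB listed
```

This explains why the 3-bit EXL2 and QuIP quants in the table fit in a fifth of the memory of the bf16 originals.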
| LLM Name | Einstein V4 Qwen 1.5 32B |
|---|---|
| Repository | Open on 🤗 |
| Base Model(s) | |
| Model Size | 32b |
| Required VRAM | 64.6 GB |
| Updated | 2024-07-01 |
| Maintainer | Weyaxi |
| Model Type | qwen2 |
| Model Files | |
| Model Architecture | Qwen2ForCausalLM |
| License | other |
| Context Length | 32768 |
| Model Max Length | 32768 |
| Transformers Version | 4.40.0.dev0 |
| Tokenizer Class | Qwen2Tokenizer |
| Padding Token | <\|endoftext\|> |
| Vocabulary Size | 152064 |
| Initializer Range | 0.02 |
| Torch Data Type | bfloat16 |
| Errors | replace |
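The "Errors: replace" field is the tokenizer's byte-decoding policy: when detokenized bytes are not valid UTF-8 (which can happen mid-stream, since Qwen2Tokenizer is byte-level BPE), invalid sequences become U+FFFD instead of raising. A minimal sketch of the same policy using Python's built-in `bytes.decode`:

```python
# Valid UTF-8 for the character "你" followed by a stray 0xFF byte,
# mimicking a token boundary that splits a multi-byte character.
raw = b"\xe4\xbd\xa0\xff"

# errors="strict" (the default) would raise UnicodeDecodeError here;
# errors="replace" substitutes the replacement character U+FFFD.
print(raw.decode("utf-8", errors="replace"))  # -> '你\ufffd'
```

This is why streaming output from byte-level tokenizers can briefly show `�` at chunk boundaries: the bytes are fine once the next token arrives, but decoded alone they are incomplete.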