| Best Alternatives | HF Rank | Context/RAM | Downloads | Likes |
|---|---|---|---|---|
| Qwen1.5 MoE A2.7B Chat 4bit | — | 32K / 8.5 GB | 1 | 4 |
| LLM Name | Qwen1.5 MoE A2.7B Chat GPTQ Int4 |
|---|---|
| Repository | Open on 🤗 Hugging Face |
| Model Size | 2.5B |
| Required VRAM | 8.4 GB |
| Updated | 2024-07-01 |
| Maintainer | Qwen |
| Model Type | qwen2_moe |
| Model Files | |
| Supported Languages | en |
| GPTQ Quantization | Yes |
| Quantization Type | gptq |
| Model Architecture | Qwen2MoeForCausalLM |
| License | other |
| Context Length | 32768 |
| Model Max Length | 32768 |
| Transformers Version | 4.39.0.dev0 |
| Tokenizer Class | Qwen2Tokenizer |
| Padding Token | <\|endoftext\|> |
| Vocabulary Size | 151936 |
| Initializer Range | 0.02 |
| Torch Data Type | float16 |
| Errors | replace |
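
A minimal loading sketch based on the card above. The repo id is an assumption inferred from the LLM name and maintainer (the card only says "Open on 🤗 Hugging Face"); GPTQ checkpoints additionally need `optimum` and `auto-gptq` installed, and per the card roughly 8.4 GB of VRAM.

```python
# Sketch only: assumes the repo id below matches this card's
# Hugging Face link and that optimum + auto-gptq are installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen1.5-MoE-A2.7B-Chat-GPTQ-Int4"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # card lists float16 weights
    device_map="auto",    # place the quantized MoE layers on GPU
)

# Qwen2Tokenizer ships a chat template, so apply_chat_template
# builds the prompt in the format the chat fine-tune expects.
messages = [{"role": "user", "content": "Briefly explain MoE models."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Strip the prompt tokens and decode only the newly generated reply.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Note that the card pins `4.39.0.dev0` as the Transformers version; the `qwen2_moe` architecture landed around 4.39, so older releases will fail to resolve `Qwen2MoeForCausalLM`.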