LLM Name | CodeQwen1.5 7B Chat GGUF
Repository | Open on 🤗
Model Name | CodeQwen1.5 7B Chat
Model Creator | Qwen |
Base Model(s) | |
Model Size | 7b |
Required VRAM | 3 GB |
Updated | 2024-07-26 |
Maintainer | second-state |
Model Type | qwen2
Model Files | |
Supported Languages | en |
GGUF Quantization | Yes |
Quantization Type | gguf, q2, q4_k, q5_k
Model Architecture | Qwen2ForCausalLM |
License | other |
Context Length | 65536 |
Model Max Length | 65536 |
Transformers Version | 4.39.3 |
Vocabulary Size | 92416 |
Torch Data Type | bfloat16 |
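
The spec block above lists GGUF quantizations (q2, q4_k, q5_k) and a 65536-token context length. As a rough, non-authoritative sketch of how such a GGUF file might be run locally, the example below uses the third-party llama-cpp-python package; the .gguf file name is an assumption and should be replaced with whichever quantization you actually download from the second-state repository.

```python
# Minimal sketch (assumptions noted inline): loading a GGUF quantization
# of CodeQwen1.5 7B Chat with llama-cpp-python and asking one question.
from llama_cpp import Llama

llm = Llama(
    model_path="CodeQwen1.5-7B-Chat-Q4_K_M.gguf",  # assumed file name; use your downloaded file
    n_ctx=65536,      # matches the listed context length; a smaller value reduces RAM/VRAM use
    n_gpu_layers=-1,  # offload all layers to GPU if memory allows; set 0 for CPU-only
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": "Write a Python function that reverses a string."},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```
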
Best Alternatives | HF Rank | Context/RAM | Downloads | Likes
---|---|---|---|---
SvelteCodeQwen1.5 7B Chat | 0.2 | 64K / 14.5 GB | 18 | 0 |
Qwen2 7B Instruct GGUF | 0.3 | 32K / 3 GB | 222 | 1 |
Qwen2 7B Instruct GGUF | 0.3 | 32K / 3 GB | 159 | 0 |
Qwen1.5 7B Chat GGUF | 0.2 | 32K / 3.1 GB | 174 | 1 |
Qwen2 7B Bnb 4bit | 0.3 | 128K / 5.5 GB | 6101 | 2 |
Tantrum 16bit | 0.3 | 128K / 15.2 GB | 80 | 0 |
Qwen2 7B Matter 0.1 Slim A | 0.2 | 128K / 15.2 GB | 13 | 2 |
PiSSA Qwen2 7B 4bit R128 5iter | 0.2 | 128K / 5.9 GB | 48 | 0 |
...lphin 2.9.2 Qwen2 7B Bpw6 EXL2 | 0.2 | 128K / 6.4 GB | 20 | 1 |
Qwen2 7B 8bit | 0.2 | 128K / 8.1 GB | 18 | 1 |