| Property | Value |
|---|---|
| LLM Name | Deepseek Coder 1.3B Instruct AWQ |
| Repository 🤗 | https://huggingface.co/TheBloke/deepseek-coder-1.3b-instruct-AWQ |
| Model Name | Deepseek Coder 1.3B Instruct |
| Model Creator | DeepSeek |
| Base Model(s) | |
| Model Size | 1.3B |
| Required VRAM | 0.9 GB |
| Updated | 2025-02-10 |
| Maintainer | TheBloke |
| Model Type | deepseek |
| Instruction-Based | Yes |
| Model Files | |
| AWQ Quantization | Yes |
| Quantization Type | awq |
| Generates Code | Yes |
| Model Architecture | LlamaForCausalLM |
| License | other |
| Context Length | 8192 |
| Model Max Length | 8192 |
| Transformers Version | 4.35.0 |
| Tokenizer Class | LlamaTokenizerFast |
| Beginning of Sentence Token | <\|begin▁of▁sentence\|> |
| End of Sentence Token | <\|EOT\|> |
| Vocabulary Size | 32256 |
| Torch Data Type | float16 |
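Since this is an AWQ-quantized, instruction-tuned checkpoint (LlamaForCausalLM, ~0.9 GB VRAM, 8192-token context), a minimal loading sketch may help. This is an assumption-laden example, not the maintainer's documented usage: it relies on transformers' built-in AWQ integration (available from roughly v4.35, matching the version listed above), the `autoawq` package, and a CUDA GPU. The prompt is built with the chat template bundled in the tokenizer rather than a hand-written DeepSeek format.

```python
# Minimal sketch: load and run the AWQ checkpoint with transformers.
# Assumes `pip install transformers autoawq` and a CUDA-capable GPU;
# AWQ kernels do not run on CPU.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/deepseek-coder-1.3b-instruct-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# from_pretrained picks up the AWQ quantization config stored in the repo,
# so no explicit quantization arguments are needed here.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Use the tokenizer's own chat template to produce the instruct format
# the model was trained on (ending with <|EOT|>, per the table above).
messages = [
    {"role": "user", "content": "Write a Python function that reverses a string."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True))
```

At ~0.9 GB of VRAM this fits comfortably on small consumer GPUs, which is the main reason to prefer this AWQ build over the 2.7 GB float16 original when quality loss from 4-bit quantization is acceptable.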
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Deepseek Coder 1.3B Instruct | 16K / 2.7 GB | 42449 | 108 |
| ...c Deepseek Coder 1.3B Instruct | 16K / 5.4 GB | 142 | 0 |
| Speechless Coder Ds 1.3B | 16K / 2.7 GB | 1435 | 0 |
| Hpc Coder V2.1.3B | 16K / 2.7 GB | 98 | 4 |
| ... 1.3B Instruct Trt Int4 G64 Hf | 16K / 0.9 GB | 138 | 0 |
| ...t Structured Output Peft Merge | 16K / 2.7 GB | 98 | 0 |
| Datascience Coder 1.3B | 16K / 2.7 GB | 33 | 1 |
| ...pseek Coder 1.3B Instruct GPTQ | 16K / 0.9 GB | 248 | 6 |
| ...Coder 1.3B Function Calling V1 | 16K / 2.7 GB | 413 | 1 |