LLM Name | Azma Deepseek Coder 1.3B Instruct Structured Output Peft Merge |
Repository 🤗 | https://huggingface.co/AswanthCManoj/azma-deepseek-coder-1.3b-instruct-structured-output-peft-merge |
Model Size | 1.3b |
Required VRAM | 2.7 GB |
Updated | 2025-02-22 |
Maintainer | AswanthCManoj |
Model Type | llama |
Instruction-Based | Yes |
Model Files | |
Generates Code | Yes |
Model Architecture | LlamaForCausalLM |
Context Length | 16384 |
Model Max Length | 16384 |
Transformers Version | 4.37.0.dev0 |
Tokenizer Class | LlamaTokenizer |
Padding Token | stic |
Vocabulary Size | 32256 |
Torch Data Type | float16 |
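
The listing describes a LlamaForCausalLM checkpoint in float16 with a 16K context, loadable in roughly 2.7 GB of VRAM. Below is a minimal loading sketch with Hugging Face Transformers; the prompt and generation settings are illustrative assumptions and not part of the model card.

```python
# Minimal sketch: load the model with Hugging Face Transformers.
# dtype and context size mirror the table above; the prompt and
# generation parameters are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AswanthCManoj/azma-deepseek-coder-1.3b-instruct-structured-output-peft-merge"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # matches the listed Torch Data Type
    device_map="auto",           # ~2.7 GB VRAM per the table
)

messages = [{"role": "user", "content": "Write a Python function that parses a JSON config file."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```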
Best Alternatives | Context / RAM | Downloads | Likes |
---|---|---|---|
Deepseek Coder 1.3B Instruct | 16K / 2.7 GB | 64461 | 113 |
...c Deepseek Coder 1.3B Instruct | 16K / 5.4 GB | 74 | 0 |
Speechless Coder Ds 1.3B | 16K / 2.7 GB | 1868 | 0 |
Hpc Coder V2.1.3B | 16K / 2.7 GB | 131 | 4 |
... 1.3B Instruct Trt Int4 G64 Hf | 16K / 0.9 GB | 167 | 0 |
Datascience Coder 1.3B | 16K / 2.7 GB | 71 | 1 |
...pseek Coder 1.3B Instruct GPTQ | 16K / 0.9 GB | 330 | 6 |
...epseek Coder 1.3B Instruct AWQ | 8K / 0.9 GB | 133 | 2 |
...Coder 1.3B Function Calling V1 | 16K / 2.7 GB | 442 | 1 |