LLM Name | Codellama CodeLlama 13B Instruct Hf W4 G128 AWQ |
Repository 🤗 | https://huggingface.co/abhinavkulkarni/codellama-CodeLlama-13b-Instruct-hf-w4-g128-awq
Model Size | 13b |
Required VRAM | 7.2 GB |
Updated | 2024-12-22 |
Maintainer | abhinavkulkarni |
Model Type | llama |
Instruction-Based | Yes |
Model Files | |
Supported Languages | code |
AWQ Quantization | Yes |
Quantization Type | awq |
Generates Code | Yes |
Model Architecture | LlamaForCausalLM |
License | llama2 |
Context Length | 16384 |
Model Max Length | 16384 |
Transformers Version | 4.33.1 |
Tokenizer Class | CodeLlamaTokenizer |
Beginning of Sentence Token | <s> |
End of Sentence Token | </s> |
Unk Token | <unk> |
Vocabulary Size | 32016 |
Torch Data Type | float16 |
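The table above describes a 4-bit AWQ build (w4, group size 128) of CodeLlama-13B-Instruct using the LlamaForCausalLM architecture and the CodeLlamaTokenizer. Below is a minimal sketch of loading and prompting this checkpoint via the AutoAWQ library; the loading path is an assumption, not the maintainer's documented usage, so consult the repository's model card (and the pinned Transformers 4.33.1 listed above) for the authoritative instructions.

```python
# Minimal sketch: load the 4-bit AWQ checkpoint with AutoAWQ and run one
# instruction-formatted prompt. The exact loading path is an assumption;
# see the repository's model card for the maintainer's own instructions.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_id = "abhinavkulkarni/codellama-CodeLlama-13b-Instruct-hf-w4-g128-awq"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # CodeLlamaTokenizer per the table
model = AutoAWQForCausalLM.from_quantized(model_id, fuse_layers=True)

# CodeLlama-Instruct models expect the Llama-2 style [INST] ... [/INST] chat format.
prompt = "[INST] Write a Python function that checks whether a string is a palindrome. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")

output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

At w4/g128 the quantized weights occupy roughly 7.2 GB of VRAM, matching the Required VRAM row above, compared with about 26 GB for the fp16 variants listed in the alternatives table below.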
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
NexusRaven 13B AWQ | 16K / 7.2 GB | 35 | 4 |
CodeLlama 13B Instruct AWQ | 16K / 7.2 GB | 63 | 9 |
CodeLlama 13B Instruct Fp16 | 16K / 26 GB | 2006 | 29 |
...Llama 13B Instruct Hf 4bit MLX | 16K / 7.8 GB | 75 | 2 |
...13B Instruct Nf4 Fp16 Upscaled | 16K / 26 GB | 446 | 0 |
CodeLlama 13B MORepair | 16K / 26 GB | 2650 | 2 |
NexusRaven V2 13B | 16K / 26 GB | 3919 | 465 |
CodeLlama 13B Instruct Hf | 16K / 26 GB | 16223 | 144 |
CodeLlama 13B Instruct Hf | 16K / 26 GB | 993 | 18 |
TableLLM 13B | 16K / 26 GB | 235 | 25 |