LLM Name | CodeLlama 13B Instruct Hf 4bit MLX |
Repository 🤗 | https://huggingface.co/mlx-community/CodeLlama-13b-Instruct-hf-4bit-MLX |
Model Size | 13b |
Required VRAM | 7.8 GB |
Updated | 2024-10-17 |
Maintainer | mlx-community |
Model Type | llama |
Instruction-Based | Yes |
Model Files | |
Supported Languages | code |
Quantization Type | 4bit |
Generates Code | Yes |
Model Architecture | LlamaForCausalLM |
License | llama2 |
Context Length | 16384 |
Model Max Length | 16384 |
Transformers Version | 4.37.2 |
Vocabulary Size | 32016 |
Torch Data Type | bfloat16 |
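As a 4-bit MLX conversion, this checkpoint is meant to be run locally on Apple silicon via the `mlx-lm` package. The snippet below is a minimal sketch of how such a model is typically loaded and prompted; the prompt text and generation settings are illustrative assumptions and are not taken from the model card.

```python
# Minimal sketch: loading the 4-bit MLX checkpoint with mlx-lm (pip install mlx-lm).
# Assumes an Apple silicon Mac; prompt wording and max_tokens are illustrative only.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/CodeLlama-13b-Instruct-hf-4bit-MLX")

# CodeLlama Instruct follows the Llama 2 [INST] ... [/INST] prompt format.
prompt = "[INST] Write a Python function that checks whether a number is prime. [/INST]"

response = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(response)
```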
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
CodeLlama 13B Instruct Fp16 | 16K / 26 GB | 1791 | 29 |
...13B Instruct Nf4 Fp16 Upscaled | 16K / 26 GB | 572 | 0 |
NexusRaven V2 13B | 16K / 26 GB | 6399 | 460 |
CodeLlama 13B Instruct Hf | 16K / 26 GB | 16682 | 143 |
CodeLlama 13B Instruct Hf | 16K / 26 GB | 1916 | 17 |
TableLLM 13B | 16K / 26 GB | 668 | 21 |
CodeLlama 13B Instruct GGUF | 16K / 5.4 GB | 355 | 2 |
NexusRaven 13B | 16K / 26 GB | 79 | 102 |
... Llama 2 13B Instruct Text2sql | 16K / 26 GB | 668 | 27 |
CodeLlama 13B Instruct AWQ | 16K / 7.2 GB | 285 | 9 |