| Property | Value |
|---|---|
| LLM Name | Codellama 13B Bnb 4bit |
| Repository 🤗 | https://huggingface.co/unsloth/codellama-13b-bnb-4bit |
| Model Size | 13b |
| Required VRAM | 7.2 GB |
| Updated | 2025-01-23 |
| Maintainer | unsloth |
| Model Type | llama |
| Model Files | |
| Supported Languages | en |
| Quantization Type | 4bit |
| Generates Code | Yes |
| Model Architecture | LlamaForCausalLM |
| License | apache-2.0 |
| Context Length | 16384 |
| Model Max Length | 16384 |
| Transformers Version | 4.44.2 |
| Tokenizer Class | CodeLlamaTokenizer |
| Padding Token | `<unk>` |
| Vocabulary Size | 32016 |
| Torch Data Type | bfloat16 |
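The card itself ships no usage code; below is a minimal sketch of loading this checkpoint with Hugging Face `transformers`. It assumes `transformers` (the table lists 4.44.2), `accelerate`, and `bitsandbytes` are installed and a CUDA GPU with roughly 8 GB of free VRAM (per the Required VRAM row) is available; the prompt is an arbitrary example.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "unsloth/codellama-13b-bnb-4bit"

# The checkpoint is saved pre-quantized in bitsandbytes 4-bit format, so the
# quantization config is read from the repo itself; device_map="auto" places
# the weights on the available GPU(s) via accelerate.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,  # matches the Torch Data Type row above
)

# Quick smoke test: complete a Python function signature.
prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights are already 4-bit, no `BitsAndBytesConfig` needs to be passed at load time; that is the main practical difference from quantizing a full-precision CodeLlama checkpoint yourself.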
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| WhiteRabbitNeo 13B V1 | 16K / 26 GB | 1353 | 409 |
| CodeLlama 13B Python Fp16 | 16K / 26 GB | 1260 | 25 |
| CodeLlama 13B Instruct Fp16 | 16K / 26 GB | 1375 | 29 |
| CodeLlama 13B Fp16 | 16K / 26 GB | 14 | 66 |
| ...Llama 13B Instruct Hf 4bit MLX | 16K / 7.8 GB | 36 | 2 |
| Trinity 13B | 16K / 26 GB | 21 | 15 |
| Trinity 13B 6.0bpw H6 EXL2 | 16K / 10 GB | 11 | 2 |
| Trinity 13B 4.0bpw H6 EXL2 | 16K / 6.8 GB | 12 | 1 |
| Trinity 13B 3.0bpw H6 EXL2 | 16K / 5.2 GB | 11 | 1 |
| Trinity 13B 5.0bpw H6 EXL2 | 16K / 8.4 GB | 11 | 1 |