| LLM Name | CodeLlama 70B Instruct Hf 4bit MLX |
|---|---|
| Repository 🤗 | https://huggingface.co/mlx-community/CodeLlama-70b-Instruct-hf-4bit-MLX |
| Model Size | 70b |
| Required VRAM | 39.1 GB |
| Updated | 2024-12-22 |
| Maintainer | mlx-community |
| Model Type | llama |
| Instruction-Based | Yes |
| Model Files | |
| Supported Languages | code |
| Quantization Type | 4bit |
| Generates Code | Yes |
| Model Architecture | LlamaForCausalLM |
| License | llama2 |
| Context Length | 4096 |
| Model Max Length | 4096 |
| Transformers Version | 4.36.2 |
| Vocabulary Size | 32016 |
| Torch Data Type | bfloat16 |
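
Since this repository is a 4-bit MLX quantization, it is typically run with the `mlx-lm` package on Apple Silicon. The snippet below is a minimal sketch, not an official usage guide: it assumes `mlx-lm` is installed, that the machine has enough unified memory for the ~39.1 GB of quantized weights, and that the tokenizer ships a chat template; check the model card for the exact prompt format.

```python
# Minimal sketch: load the 4-bit MLX weights and generate one completion.
# Assumes `pip install mlx-lm` and sufficient unified memory (~39.1 GB of weights).
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/CodeLlama-70b-Instruct-hf-4bit-MLX")

prompt = "Write a Python function that checks whether a string is a palindrome."

# Applying the tokenizer's chat template (if one is present) is an assumption
# based on common mlx-lm usage for instruction-tuned models.
if getattr(tokenizer, "chat_template", None):
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}],
        tokenize=False,
        add_generation_prompt=True,
    )

response = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(response)
```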
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| ...70B Instruct Nf4 Fp16 Upscaled | 4K / 138.7 GB | 409 | 2 |
| ...70B Instruct Hf 5.0bpw H6 EXL2 | 2K / 43.6 GB | 7 | 6 |
| ...0B Instruct Hf 2.65bpw H6 EXL2 | 2K / 23.4 GB | 9 | 3 |
| ...70B Instruct Hf 2.4bpw H6 EXL2 | 2K / 21.3 GB | 10 | 1 |
| ...70B Instruct Hf 4.0bpw H6 EXL2 | 2K / 35.1 GB | 12 | 1 |
| CodeLlama 70B Instruct Hf | 4K / 72.3 GB | 4291 | 204 |
| Code Llama 70B Python Instruct | 4K / 138.1 GB | 82 | 1 |
| CodeLlama 70B Instruct Hf | 4K / 72.3 GB | 286 | 16 |
| CodeLlama 70B Instruct Neuron | 4K / GB | 425 | 1 |
| CodeLlama 70B Instruct Hf GGUF | 4K / 25.5 GB | 174 | 2 |