CodeLlama 70B Instruct Hf 4bit MLX by mlx-community


  4bit   Code   Codegen   Conversational   Instruct   Llama   Llama2   Mlx   Quantized   Region:us   Sharded   Tensorflow

CodeLlama 70B Instruct Hf 4bit MLX Benchmarks

Scores (nn.n%) show how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
CodeLlama 70B Instruct Hf 4bit MLX (mlx-community/CodeLlama-70b-Instruct-hf-4bit-MLX)

CodeLlama 70B Instruct Hf 4bit MLX Parameters and Internals

Model Type: text generation, code generation
Additional Notes: Model converted to MLX format for use with the `mlx-lm` package.
Accepted Input Modalities: code
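Since the card says the checkpoint is converted for the `mlx-lm` package, a minimal loading sketch looks like the following. This assumes an Apple-silicon Mac with roughly 40 GB of free memory; the repo id is taken from the listing above, and the prompt wording is purely illustrative:

```python
# Sketch: text/code generation with mlx-lm (Apple silicon only).
# Calling run() downloads the 8 sharded weight files (~39 GB), so the
# import and load happen lazily inside the function.
REPO_ID = "mlx-community/CodeLlama-70b-Instruct-hf-4bit-MLX"

def run(prompt: str, max_tokens: int = 256) -> str:
    from mlx_lm import load, generate  # pip install mlx-lm
    model, tokenizer = load(REPO_ID)
    return generate(model, tokenizer, prompt=prompt, max_tokens=max_tokens)

# Example (not executed here):
# print(run("Write a Python function that reverses a linked list."))
```

Keeping the load inside `run()` is a sketch-level choice so the module can be inspected without pulling the weights; in practice you would load once and reuse the model across prompts.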
LLM Name: CodeLlama 70B Instruct Hf 4bit MLX
Repository 🤗: https://huggingface.co/mlx-community/CodeLlama-70b-Instruct-hf-4bit-MLX
Model Size: 70b
Required VRAM: 39.1 GB
Updated: 2024-12-22
Maintainer: mlx-community
Model Type: llama
Instruction-Based: Yes
Model Files: 5.3 GB each for shards 1–7 of 8, plus 2.0 GB for shard 8 of 8
Supported Languages: code
Quantization Type: 4bit
Generates Code: Yes
Model Architecture: LlamaForCausalLM
License: llama2
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.36.2
Vocabulary Size: 32016
Torch Data Type: bfloat16
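The 39.1 GB figure above is consistent with back-of-the-envelope 4-bit arithmetic. The sketch below assumes group-wise quantization with group size 64 and one fp16 scale plus one fp16 bias per group (MLX's defaults), which adds about 0.5 bits of overhead per weight:

```python
# Rough size estimate for a 70B-parameter model quantized to 4 bits.
# Assumption: group size 64 with fp16 scale + fp16 bias per group,
# i.e. 32 extra bits per 64 weights = 0.5 bits/weight of overhead.
params = 70e9
bits_per_weight = 4 + 32 / 64           # 4.5 effective bits per weight
size_gb = params * bits_per_weight / 8 / 1e9
print(f"{size_gb:.1f} GB")              # ~39.4 GB, close to the listed 39.1 GB
```

The small remaining gap is plausibly explained by parts of the model (e.g. norms) being kept at higher precision and by the exact parameter count differing from a round 70e9.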

Best Alternatives to CodeLlama 70B Instruct Hf 4bit MLX

Best Alternatives                     Context / RAM     Downloads   Likes
...70B Instruct Nf4 Fp16 Upscaled     4K / 138.7 GB     409         2
...70B Instruct Hf 5.0bpw H6 EXL2     2K / 43.6 GB      7           6
...0B Instruct Hf 2.65bpw H6 EXL2     2K / 23.4 GB      9           3
...70B Instruct Hf 2.4bpw H6 EXL2     2K / 21.3 GB      10          1
...70B Instruct Hf 4.0bpw H6 EXL2     2K / 35.1 GB      12          1
CodeLlama 70B Instruct Hf             4K / 72.3 GB      4291        204
Code Llama 70B Python Instruct        4K / 138.1 GB     82          1
CodeLlama 70B Instruct Hf             4K / 72.3 GB      286         16
CodeLlama 70B Instruct Neuron         4K / n/a          425         1
CodeLlama 70B Instruct Hf GGUF        4K / 25.5 GB      174         2
Note: a green score (e.g. "73.2") means the model outperforms mlx-community/CodeLlama-70b-Instruct-hf-4bit-MLX.

Rank the CodeLlama 70B Instruct Hf 4bit MLX Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217