LLM Explorer: A Curated Large Language Model Directory and Analytics

CodeLlama 34B Python Hf GGUF by MaziyarPanahi



  Arxiv:2308.12950   2-bit   3-bit   4-bit   5-bit   6-bit   8-bit   Autotrain compatible Base model:codellama/codellama...   Code   Codegen   Endpoints compatible   Gguf   Has space   License:apache-2.0   License:llama2   Llama   Llama2   Pytorch   Quantized   Region:us   Safetensors

CodeLlama 34B Python Hf GGUF Benchmarks

Rank the CodeLlama 34B Python Hf GGUF Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
CodeLlama 34B Python Hf GGUF (MaziyarPanahi/CodeLlama-34b-Python-hf-GGUF)

Best Alternatives to CodeLlama 34B Python Hf GGUF

Best Alternatives | Score | Context / Model Size | Downloads (HF Rank)
CodeLlama 34B Instruct Hf GGUF | 50.2 | 0K / 12.5 GB | 3380
CodeLlama 34B Hf GGUF | 48.5 | 0K / 12.5 GB | 3193
CodeLlama 34B GGUF | 48.5 | 0K / 14.2 GB | 2854
...chless Codellama 34B V2.0 GGUF | 46.1 | 0K / 14.2 GB | 68
Phind CodeLlama 34B V2 GGUF | n/a | 0K / 14.2 GB | 99147
CodeLlama 34B Instruct GGUF | n/a | 0K / 14.2 GB | 4190
CodeLlama 34B Python GGUF | n/a | 0K / 14.2 GB | 431
CodeFuse CodeLlama 34B GGUF | n/a | 0K / 14.2 GB | 418
...mantha 1.11 CodeLlama 34B GGUF | n/a | 0K / 14.2 GB | 817
...d CodeLlama 34B Python V1 GGUF | n/a | 0K / 14.2 GB | 1413
Note: a green Score (e.g. "73.2") means the model is better than MaziyarPanahi/CodeLlama-34b-Python-hf-GGUF.

CodeLlama 34B Python Hf GGUF Parameters and Internals

LLM Name: CodeLlama 34B Python Hf GGUF
Repository: Open on 🤗 Hugging Face
Model Name: CodeLlama-34b-Python-hf-GGUF
Model Creator: codellama
Base Model(s): CodeLlama 34B Python Hf (codellama/CodeLlama-34b-Python-hf)
Model Size: 34b
Required VRAM: 12.5 GB
Model Type: llama
Model Files: 12.5 GB, 14.6 GB, 16.3 GB, 17.8 GB, 19.2 GB, 20.2 GB, 23.2 GB, 23.8 GB, 27.7 GB, 35.9 GB
GGUF Quantization: Yes
Quantization Type: gguf
Generates Code: Yes
Model Architecture: AutoModel
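The file sizes listed above span the 2-bit through 8-bit GGUF quantizations tagged on this page. A quick sanity check is to convert each file size into approximate bits per weight by dividing by the 34B parameter count. A minimal sketch, assuming decimal gigabytes (as listed) and ignoring the small GGUF header/metadata overhead:

```python
def bits_per_weight(file_size_gb: float, n_params_billion: float = 34.0) -> float:
    """Approximate bits stored per weight for a quantized model file.

    Assumes file sizes are in decimal GB and ignores GGUF metadata
    overhead, so the result slightly overestimates the true precision.
    """
    return file_size_gb * 8.0 / n_params_billion

# The smallest listed file (12.5 GB) works out to roughly 2.9 bits per
# weight, consistent with a 2-bit K-quant; the largest (35.9 GB) to
# roughly 8.4, consistent with an 8-bit quantization.
for size_gb in (12.5, 14.6, 20.2, 35.9):
    print(f"{size_gb:5.1f} GB -> {bits_per_weight(size_gb):.2f} bits/weight")
```

The same arithmetic explains the 12.5 GB "Required VRAM" figure: it corresponds to fully loading the smallest (2-bit) quantization, before accounting for the KV cache and activation memory.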
Original data from Hugging Face, OpenCompass, and various public git repos.
Release v2024022003