Codellama 7B Instruct GGUF by DevShubham


Tags: Arxiv:2308.12950, Base model:codellama/codellama..., Base model:quantized:codellama..., Code, Codegen, GGUF, Instruct, Llama, Llama2, Quantized, Region:us

Codellama 7B Instruct GGUF Benchmarks

Benchmark scores (nn.n%) indicate how the model compares to the reference models Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4"). No scores are currently listed for DevShubham/Codellama-7B-Instruct-GGUF.

Codellama 7B Instruct GGUF Parameters and Internals

Model Type: llama
Use Cases:
  Areas: commercial, research
  Primary Use Cases: code synthesis and understanding tasks (see the prompt-format sketch after this section)
  Limitations: use in languages other than English; use in violation of applicable laws
  Considerations: refer to the Responsible Use Guide
Additional Notes: In aggregate, training all nine Code Llama models required 400K GPU-hours of computation, and the estimated total emissions were 65.3 tCO2eq, 100% of which was offset by Meta's sustainability program.
Training Details:
  Data Sources: same data as Llama 2, with different weights
  Hardware Used: A100-80GB (TDP of 350-400 W)
  Model Architecture: auto-regressive language model using an optimized transformer architecture
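
Since this is an instruction-tuned Code Llama variant, prompts are usually wrapped in the Llama 2 / Code Llama Instruct chat convention ([INST] ... [/INST], with an optional <<SYS>> system block). The helper below is a minimal illustrative sketch, not taken from this repository; the example task and function name are assumptions.

# Minimal sketch of building a Code Llama Instruct-style prompt.
def build_prompt(instruction: str, system: str | None = None) -> str:
    # The user turn is wrapped in [INST] ... [/INST]; an optional system
    # message goes inside a <<SYS>> ... <</SYS>> block at the start.
    if system:
        instruction = f"<<SYS>>\n{system}\n<</SYS>>\n\n{instruction}"
    return f"[INST] {instruction} [/INST]"

prompt = build_prompt("Write a Python function that checks whether a string is a palindrome.")
print(prompt)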
LLM Name: Codellama 7B Instruct GGUF
Repository 🤗: https://huggingface.co/DevShubham/Codellama-7B-Instruct-GGUF
Model Name: CodeLlama 13B Instruct
Model Creator: Meta
Base Model(s): CodeLlama 13B Instruct Hf (codellama/CodeLlama-13b-Instruct-hf)
Model Size: 13b
Required VRAM: 2.8 GB
Updated: 2025-01-16
Maintainer: DevShubham
Model Type: llama
Instruction-Based: Yes
Model Files: 2.8 GB, 3.6 GB, 3.3 GB, 3.0 GB, 3.8 GB, 4.1 GB, 3.9 GB, 4.7 GB, 4.8 GB, 4.7 GB, 5.5 GB, 7.2 GB
Supported Languages: code
GGUF Quantization: Yes
Quantization Type: gguf
Generates Code: Yes
Model Architecture: AutoModel
License: llama2
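
Because the repository ships GGUF quantizations, the files are intended for llama.cpp-compatible runtimes rather than plain transformers. The sketch below assumes the huggingface_hub and llama-cpp-python packages are installed; the GGUF filename is hypothetical, so check the repository's file list for the actual names of the quantization variants (they correspond to the file sizes listed above).

from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one GGUF quantization from the repository.
# NOTE: the filename below is a hypothetical placeholder; substitute a real
# file name from the repo's "Files and versions" tab.
model_path = hf_hub_download(
    repo_id="DevShubham/Codellama-7B-Instruct-GGUF",
    filename="codellama-7b-instruct.Q4_K_M.gguf",  # hypothetical
)

# Load the model with llama-cpp-python and run one instruct-style completion.
llm = Llama(model_path=model_path, n_ctx=4096)
output = llm(
    "[INST] Write a Python function that reverses a linked list. [/INST]",
    max_tokens=256,
)
print(output["choices"][0]["text"])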

Best Alternatives to Codellama 7B Instruct GGUF

Alternative | Context / RAM | Downloads / Likes
Codellama 13B Instruct GGUF | 0K / 13.8 GB | 180
CodeLlama 13B Instruct GGUF | 0K / 5.4 GB | 4362118
CodeLlama 13B Instruct GGML | 0K / 5.7 GB | 1319
...fast Codellama 13B Instruct Hf | 0K / 13 GB | 71

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227