CodeLlama 13B Hf by codellama


Tags: Arxiv:2308.12950, Autotrain compatible, Code, Codegen, Endpoints compatible, Llama, Llama2, Pytorch, Region:us, Safetensors, Sharded, Tensorflow

CodeLlama 13B Hf Benchmarks

CodeLlama 13B Hf (codellama/CodeLlama-13b-hf)

CodeLlama 13B Hf Parameters and Internals

Model Type 
text-generation
Use Cases 
Areas:
commercial, research
Applications:
code synthesis, understanding tasks, handling Python programming
Primary Use Cases:
code assistant and generation applications
Limitations:
Use in languages other than English; use in ways that violate applicable laws or regulations
Considerations:
Follow Acceptable Use Policy and Licensing Agreement
Additional Notes 
This is a non-official Code Llama repository; the official repository is available under Meta's Llama organization on Hugging Face.
Supported Languages 
English (high proficiency)
Training Details 
Data Sources:
same data as Llama 2
Data Volume:
Not specified
Methodology:
Pre-trained and fine-tuned generative text models
Training Time:
January 2023 to July 2023
Hardware Used:
Meta’s Research Super Cluster with A100-80GB GPUs
Model Architecture:
auto-regressive language model using an optimized transformer architecture
Safety Evaluation 
Methodologies:
community feedback
Risk Categories:
misinformation, bias
Ethical Considerations:
Potential to produce inaccurate or objectionable responses
Responsible AI Considerations
Fairness:
Testing conducted only in English; may not cover all scenarios
Transparency:
Outputs cannot be predicted in advance
Accountability:
Developers should perform safety testing and application tuning
Mitigation Strategies:
Responsible Use Guide available at Meta's website
Input Output 
Input Format:
text only
Accepted Modalities:
text
Output Format:
text only
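
Since the model is text-in, text-out, the standard transformers text-generation pipeline is enough for the code-assistant use case described above. A minimal sketch, assuming transformers and torch are installed and roughly 26 GB of VRAM is available for the bfloat16 weights; the prompt and sampling settings are illustrative:

```python
import torch
from transformers import pipeline

# Text-only input, text-only output, as listed above.
generator = pipeline(
    "text-generation",
    model="codellama/CodeLlama-13b-hf",
    torch_dtype=torch.bfloat16,  # matches the listed Torch Data Type
    device_map="auto",
)

prompt = "def fibonacci(n: int) -> int:\n"
result = generator(prompt, max_new_tokens=128, do_sample=False)
print(result[0]["generated_text"])
```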
LLM Name: CodeLlama 13B Hf
Repository: https://huggingface.co/codellama/CodeLlama-13b-hf
Model Size: 13b
Required VRAM: 26 GB
Updated: 2025-02-05
Maintainer: codellama
Model Type: llama
Model Files: 9.9 GB (1 of 3), 9.9 GB (2 of 3), 6.2 GB (3 of 3); 9.9 GB (1 of 3), 9.9 GB (2 of 3), 6.5 GB (3 of 3)
Supported Languages: code
Generates Code: Yes
Model Architecture: LlamaForCausalLM
License: llama2
Context Length: 16384
Model Max Length: 16384
Transformers Version: 4.32.0.dev0
Tokenizer Class: CodeLlamaTokenizer
Beginning of Sentence Token: <s>
End of Sentence Token: </s>
Unk Token: <unk>
Vocabulary Size: 32016
Torch Data Type: bfloat16
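
The values above can be checked directly against the checkpoint. The sketch below (assuming a recent transformers release and the same ~26 GB VRAM budget) loads the weights in bfloat16, prints the tokenizer class, vocabulary size, and context length listed in the table, and exercises the infill mode that CodeLlamaTokenizer exposes through its <FILL_ME> token; the generated completion itself is illustrative, not guaranteed:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "codellama/CodeLlama-13b-hf"
tokenizer = AutoTokenizer.from_pretrained(repo_id)  # resolves to the CodeLlama tokenizer
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,   # Torch Data Type: bfloat16
    device_map="auto",
)

print(type(tokenizer).__name__)              # tokenizer class listed above
print(model.config.vocab_size)               # 32016
print(model.config.max_position_embeddings)  # 16384 (context length)

# Infilling: the tokenizer splits the prompt on <FILL_ME> into prefix/suffix
# special tokens, and the model generates the missing middle span.
prompt = 'def remove_non_ascii(s: str) -> str:\n    """ <FILL_ME>\n    return result'
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(model.device)
generated = model.generate(input_ids, max_new_tokens=128)
filling = tokenizer.decode(generated[0, input_ids.shape[1]:], skip_special_tokens=True)
print(prompt.replace("<FILL_ME>", filling))
```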

Quantized Models of the CodeLlama 13B Hf

Model | Likes | Downloads | VRAM
CodeLlama 13B GGUF | 60 | 5388 | 5 GB
CodeLlama 13B AWQ | 4 | 132 | 7 GB
CodeLlama 13B GGML | 33 | 14 | 5 GB
CodeLlama 13B GPTQ | 11 | 60 | 7 GB
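
The quantized variants above trade some quality for a much smaller footprint (roughly 5 to 7 GB instead of 26 GB). A minimal sketch of running a GGUF quantization locally with llama-cpp-python; the .gguf file name is a placeholder for whichever quantization file is actually downloaded:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./codellama-13b.Q4_K_M.gguf",  # placeholder: point at your downloaded .gguf file
    n_ctx=16384,      # full context length listed above
    n_gpu_layers=-1,  # offload all layers to GPU if available; 0 for CPU only
)

out = llm("def quicksort(arr):", max_tokens=128, temperature=0.0)
print(out["choices"][0]["text"])
```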

Best Alternatives to CodeLlama 13B Hf

Best Alternatives | Context / RAM | Downloads | Likes
NexusRaven V2 13B | 16K / 26 GB | 3904 | 466
CodeLlama 13B Instruct Hf | 16K / 26 GB | 21714 | 145
CodeLlama 13B MORepair | 16K / 26 GB | 26 | 2
CodeLlama 13B Hf | 16K / 26 GB | 754 | 80
CodeLlama 13B Instruct Hf | 16K / 26 GB | 1205 | 20
CodeLlama 13B Python Hf | 16K / 26 GB | 2654 | 49
...ma 13B Hf Truncated Embeddings | 16K / 52.3 GB | 5 | 0
Tora Code 13B V1.0 | 16K / 26 GB | 1257 | 14
Codellama 13B Oasst Sft V10 | 16K / 25.9 GB | 2360 | 66
...ma Airoboros Orca Platypus 13B | 16K / 26 GB | 1278 | 0

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227