CodeLlama 70B Hf by meta-llama


Tags: Arxiv:2308.12950, Autotrain compatible, Code, Codegen, Endpoints compatible, Facebook, Llama, Llama2, Meta, Pytorch, Region:us, Safetensors, Sharded, Tensorflow

CodeLlama 70B Hf Benchmarks

Benchmark scores, shown as percentages, indicate how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").

CodeLlama 70B Hf Parameters and Internals

Model Type: Generative text model, code synthesis

Use Cases
Areas: Commercial, Research
Applications: Code synthesis and understanding, Python programming language support, code assistant and generation applications
Primary Use Cases: Commercial and research use in code synthesis and understanding
Limitations: Not tested exhaustively in languages other than English or in all possible scenarios; may produce inaccurate or objectionable outputs
Considerations: Safety testing and tuning should be performed before deploying applications.

Additional Notes: Code Llama models are developed and released by Meta and are available in 7B, 13B, 34B, and 70B parameter sizes. See Meta's Responsible Use Guide for further guidance.

Supported Languages: English and programming languages (high proficiency)

Training Details
Data Sources: Offline datasets
Methodology: Fine-tuned on sequences of up to 16k tokens; supports up to 100k tokens at inference time
Context Length: 100,000 tokens
Hardware Used: A100-80GB GPUs
Model Architecture: Auto-regressive language model using an optimized transformer architecture

Safety Evaluation
Methodologies: Safety evaluations as described in Section 4 of the research paper
Ethical Considerations: As a new technology, the model carries risks and may produce inaccurate or objectionable outputs.

Input/Output
Input Format: Text
Output Format: Text
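Since the interface is plain text in, text out, a minimal usage sketch follows. It is not part of the original card; it assumes the `transformers` and `torch` packages are installed, access to the gated meta-llama/CodeLlama-70b-hf repository has been granted, and enough GPU memory is available for the bfloat16 weights.

```python
# Minimal sketch of text-in/text-out code completion with the Hugging Face
# transformers pipeline. Assumptions (not from the original card): transformers
# and torch are installed, gated-repo access is granted, and sufficient GPU
# memory is available for the bfloat16 weights.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/CodeLlama-70b-hf",
    torch_dtype=torch.bfloat16,  # matches the checkpoint's native dtype
    device_map="auto",           # spread the sharded weights across available devices
)

# The base (non-Instruct) model is a plain completion model: give it the start
# of a program and it continues the text.
prompt = "def fibonacci(n: int) -> int:\n"
result = generator(prompt, max_new_tokens=128, do_sample=False)
print(result[0]["generated_text"])
```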
LLM Name: CodeLlama 70B Hf
Repository: https://huggingface.co/meta-llama/CodeLlama-70b-hf
Model Size: 70b
Required VRAM: 77 GB
Updated: 2025-03-15
Maintainer: meta-llama
Model Type: llama
Model Files: 29 shards (4.7 GB: 1-of-29, 4.7 GB: 2-of-29, 5.0 GB: 3-of-29, 5.0 GB: 4-of-29, 4.7 GB: 5-of-29, 4.7 GB: 6-of-29, 4.7 GB: 7-of-29, 5.0 GB: 8-of-29, 5.0 GB: 9-of-29, 4.7 GB: 10-of-29, 4.7 GB: 11-of-29, 4.7 GB: 12-of-29, 5.0 GB: 13-of-29, 5.0 GB: 14-of-29, 4.7 GB: 15-of-29, 4.7 GB: 16-of-29, 4.7 GB: 17-of-29, 5.0 GB: 18-of-29, 5.0 GB: 19-of-29, 4.7 GB: 20-of-29, 4.7 GB: 21-of-29, 4.7 GB: 22-of-29, 5.0 GB: 23-of-29, 5.0 GB: 24-of-29, 4.7 GB: 25-of-29, 4.7 GB: 26-of-29, 4.7 GB: 27-of-29, 5.0 GB: 28-of-29, 3.8 GB: 29-of-29)
Supported Languages: code
Generates Code: Yes
Model Architecture: LlamaForCausalLM
License: llama2
Context Length: 16384
Model Max Length: 16384
Transformers Version: 4.37.1
Vocabulary Size: 32016
Torch Data Type: bfloat16
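
The fields above (architecture, context length, vocabulary size, torch dtype) can be checked programmatically. The following is a hedged sketch rather than an official snippet from the model card; it assumes transformers is installed and the gated repository is accessible.

```python
# Sketch (assumed usage, not from the original listing): read the published
# config to confirm the fields listed above, then load the sharded checkpoint.
import torch
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

repo_id = "meta-llama/CodeLlama-70b-hf"

config = AutoConfig.from_pretrained(repo_id)
print(config.architectures)            # ['LlamaForCausalLM']
print(config.max_position_embeddings)  # 16384 (context length / model max length)
print(config.vocab_size)               # 32016

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # listed torch data type
    device_map="auto",           # distribute the 29 weight shards across GPUs/CPU
)
```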

Best Alternatives to CodeLlama 70B Hf

Best Alternatives | Context / RAM | Downloads | Likes
CodeLlama 70B Hf | 16K / 77 GB | 8113 | 16
Sqlcoder 70B Alpha | 16K / 138.7 GB | 667 | 222
...enbuddy Codellama 70B V17.1 4K | 16K / 138.9 GB | 17 | 1
BigCodeLlama 169B | 16K / 338.2 GB | 9 | 14
CodeLlama 70B Instruct Hf | 4K / 72.3 GB | 6551 | 207
CodeLlama 70B Instruct Hf | 4K / 72.3 GB | 10977 | 16
Code Llama 70B Python Instruct | 4K / 138.1 GB | 37 | 1
CodeLlama 70B Python Hf | 4K / 77 GB | 207 | 108
CodeRosa 70B AB1 | 4K / 61.3 GB | 49 | 1
CodeLlama 70B Python Hf | 4K / 77 GB | 99 | 12



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227