Codegen 16B Multi by Salesforce


arXiv: 2203.13474 | Autotrain compatible | Codegen | Endpoints compatible | PyTorch | Region: us

Codegen 16B Multi Benchmarks

Codegen 16B Multi (Salesforce/codegen-16B-multi)

Codegen 16B Multi Parameters and Internals

Model Type: text generation, program synthesis

Use Cases
Areas: research, commercial applications
Applications: program synthesis, code generation
Primary Use Cases: generating executable code from English prompts; completing partially generated code
Limitations: performs well only on program synthesis tasks
Considerations: prompts should be supplied as a comment string (see the sketch below)

Supported Languages: Python (High), C (High), C++ (High), Go (High), Java (High), JavaScript (High)
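As a rough illustration of the comment-string prompt format noted above, here is a minimal sketch using the Hugging Face transformers text-generation pipeline. The model id comes from this card; the prompt text, generation settings, and the note about smaller CodeGen checkpoints are assumptions added for illustration, not part of the original card.

```python
# Minimal sketch of comment-string prompting via the transformers pipeline.
# The 16B checkpoint needs roughly 32 GB of VRAM in float16; the same prompt
# format also works with the smaller CodeGen "multi" checkpoints (350M/2B/6B).
from transformers import pipeline

generator = pipeline("text-generation", model="Salesforce/codegen-16B-multi")

# The prompt is an English description written as a comment, optionally
# followed by the start of the function to complete.
prompt = "# Write a function that returns the sum of two numbers.\ndef add(a, b):"

result = generator(prompt, max_new_tokens=64, do_sample=False)
print(result[0]["generated_text"])
```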
Training Details
Data Sources: BigQuery, GitHub repositories
Data Volume: 119.2B tokens
Methodology: cross-entropy loss to maximize the likelihood of sequential inputs (see the sketch below)
Hardware Used: TPU-v4-512
Model Architecture: autoregressive language model
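For readers unfamiliar with the objective listed above, the sketch below shows the standard next-token cross-entropy loss used by autoregressive language models. It is purely illustrative and is not Salesforce's training code; the function name and shapes are assumptions.

```python
# Schematic next-token cross-entropy loss for an autoregressive LM (illustrative only).
import torch
import torch.nn.functional as F

def next_token_loss(logits: torch.Tensor, input_ids: torch.Tensor) -> torch.Tensor:
    """logits: (batch, seq_len, vocab_size); input_ids: (batch, seq_len)."""
    # Position t predicts token t+1: drop the last logit, shift labels left by one.
    shift_logits = logits[:, :-1, :].contiguous()
    shift_labels = input_ids[:, 1:].contiguous()
    return F.cross_entropy(
        shift_logits.view(-1, shift_logits.size(-1)),
        shift_labels.view(-1),
    )
```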
LLM Name: Codegen 16B Multi
Repository 🤗: https://huggingface.co/Salesforce/codegen-16B-multi
Model Size: 16b
Required VRAM: 32.2 GB
Updated: 2025-02-16
Maintainer: Salesforce
Model Type: codegen
Model Files: 32.2 GB
Generates Code: Yes
Model Architecture: CodeGenForCausalLM
License: bsd-3-clause
Model Max Length: 2048
Transformers Version: 4.21.0.dev0
Tokenizer Class: GPT2Tokenizer
Vocabulary Size: 51200
Torch Data Type: float16
Activation Function: gelu_new
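The settings in the table above (CodeGenForCausalLM architecture, GPT2-style tokenizer, float16 weights, 2048-token max length) translate into a load-and-generate sketch like the one below. The prompt, generation parameters, and the use of device_map="auto" (which requires the accelerate package) are assumptions for illustration, not prescribed by the card.

```python
# Loading sketch based on the card's settings: float16 weights (~32.2 GB),
# 2048-token max length, GPT2-style tokenizer, CodeGenForCausalLM architecture.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Salesforce/codegen-16B-multi"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,   # matches the card's Torch Data Type
    device_map="auto",           # assumes `accelerate` is installed
)

# Keep prompts within the 2048-token Model Max Length.
prompt = "# Python 3\n# Return the n-th Fibonacci number.\ndef fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=2048).to(model.device)

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=128,
        do_sample=False,
        pad_token_id=tokenizer.eos_token_id,  # CodeGen has no dedicated pad token
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```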

Best Alternatives to Codegen 16B Multi

Best Alternatives | Context / RAM | Downloads | Likes
Codegen2 16B P | 0K / 64.3 GB | 772 | 45
Instruct Codegen 16B | 0K / 32.2 GB | 35 | 21
Codegen 16B Mono Toolbench | 0K / 128.4 GB | 27 | 5
Codegen 16B Multi 6 Parts | 0K / 32.2 GB | 9 | 0
Codegen 16B Nl Sharded | 0K / 32.1 GB | 10 | 7
Fine Tuned Codegen 16B Verilog | 0K / 32.2 GB | 117 | 12
Codegen 16B Nl | 0K / 32.2 GB | 1804 | 18
Codegen 16B Mono | 0K / 32.2 GB | 847 | 125


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227