| LLM Name | Codegen2 16B P |
|---|---|
| Repository 🤗 | https://huggingface.co/Salesforce/codegen2-16B_P |
| Model Size | 16b |
| Required VRAM | 64.3 GB |
| Updated | 2025-02-09 |
| Maintainer | Salesforce |
| Model Type | codegen |
| Model Files | |
| Generates Code | Yes |
| Model Architecture | CodeGenForCausalLM |
| License | apache-2.0 |
| Model Max Length | 1024 |
| Transformers Version | 4.25.1 |
| Tokenizer Class | GPT2Tokenizer |
| Vocabulary Size | 51200 |
| Torch Data Type | float32 |
| Activation Function | gelu_new |
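The details above (architecture `CodeGenForCausalLM`, `GPT2Tokenizer`, float32 weights) map directly onto the standard `transformers` loading API. Below is a minimal sketch, not taken from the card itself; the helper name `complete_code` is illustrative, and `trust_remote_code=True` is an assumption based on CodeGen2 checkpoints shipping custom model code:

```python
# Hedged sketch: loading Salesforce/codegen2-16B_P with Hugging Face transformers.
# Note: the float32 weights are ~64 GB, so actually running this requires a
# machine with substantial RAM/VRAM (see "Required VRAM" above).
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL_ID = "Salesforce/codegen2-16B_P"

def complete_code(prompt: str, max_new_tokens: int = 64) -> str:
    """Return a code completion for `prompt` (illustrative helper)."""
    # trust_remote_code=True lets transformers use the custom code bundled
    # with the CodeGen2 repository (assumption: required for this checkpoint).
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, trust_remote_code=True)
    inputs = tokenizer(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(out[0], skip_special_tokens=True)
```

With the model max length of 1024 tokens, prompts plus generated tokens should stay under that budget.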
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Instruct Codegen 16B | 0K / 32.2 GB | 18 | 21 |
| Codegen 16B Mono Toolbench | 0K / 128.4 GB | 16 | 5 |
| Codegen 16B Multi 6 Parts | 0K / 32.2 GB | 6 | 0 |
| Codegen 16B Nl Sharded | 0K / 32.1 GB | 7 | 7 |
| Fine Tuned Codegen 16B Verilog | 0K / 32.2 GB | 114 | 12 |
| Codegen 16B Nl | 0K / 32.2 GB | 1651 | 18 |
| Codegen 16B Mono | 0K / 32.2 GB | 1020 | 125 |
| Codegen 16B Multi | 0K / 32.2 GB | 462 | 120 |