| LLM Name | Codegen 16B Nl Sharded |
|---|---|
| Repository 🤗 | https://huggingface.co/abacaj/codegen-16B-nl-sharded |
| Model Size | 16b |
| Required VRAM | 32.1 GB |
| Updated | 2025-03-12 |
| Maintainer | abacaj |
| Model Type | codegen |
| Model Files | |
| Generates Code | Yes |
| Model Architecture | CodeGenForCausalLM |
| License | bsd-3-clause |
| Model Max Length | 2048 |
| Transformers Version | 4.26.1 |
| Tokenizer Class | GPT2Tokenizer |
| Vocabulary Size | 51200 |
| Torch Data Type | float16 |
| Activation Function | gelu_new |
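The metadata above (a sharded `CodeGenForCausalLM` checkpoint, `float16` weights, ~32 GB VRAM) suggests loading via the standard Transformers auto classes. The sketch below is an assumption based on those fields, not code from the repository; the model id comes from the repository link, while the function name `load_model` and the generation prompt are illustrative.

```python
# Minimal sketch for loading the sharded CodeGen-16B-nl checkpoint with
# Hugging Face Transformers. Expect roughly 32 GB of accelerator memory
# in float16, per the "Required VRAM" field above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "abacaj/codegen-16B-nl-sharded"  # from the repository link above


def load_model():
    """Load tokenizer and model; sharded weights keep each file small
    enough to load without holding the full checkpoint in memory at once."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.float16,  # matches the card's Torch Data Type
        device_map="auto",          # spread shards across available devices
    )
    return tokenizer, model


if __name__ == "__main__":
    tokenizer, model = load_model()
    inputs = tokenizer("def hello_world():", return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Note that the card lists a 2048-token maximum context, so prompts plus generated tokens should stay under that limit.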
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Codegen2 16B P | 0K / 64.3 GB | 756 | 45 |
| Instruct Codegen 16B | 0K / 32.2 GB | 55 | 21 |
| Codegen 16B Mono Toolbench | 0K / 128.4 GB | 41 | 5 |
| Codegen 16B Multi 6 Parts | 0K / 32.2 GB | 17 | 0 |
| Fine Tuned Codegen 16B Verilog | 0K / 32.2 GB | 192 | 13 |
| Codegen 16B Nl | 0K / 32.2 GB | 2079 | 18 |
| Codegen 16B Multi | 0K / 32.2 GB | 625 | 121 |
| Codegen 16B Mono | 0K / 32.2 GB | 568 | 125 |