LLM Name | Codegen 2B Mono Finetuned Python 18K Alpaca Full Dataset |
Repository 🤗 | https://huggingface.co/Vellimani/codegen-2B-mono-finetuned-python-18k-alpaca-full-dataset |
Model Size | 2B |
Required VRAM | 5.6 GB |
Updated | 2025-02-22 |
Maintainer | Vellimani |
Model Type | codegen |
Generates Code | Yes |
Model Architecture | CodeGenForCausalLM |
Model Max Length | 2048 |
Transformers Version | 4.40.0 |
Tokenizer Class | GPT2Tokenizer |
Padding Token | <|endoftext|> |
Vocabulary Size | 50295 |
Torch Data Type | float16 |
Activation Function | gelu_new |
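A minimal loading sketch based on the specifications above, assuming the checkpoint follows the standard CodeGen conventions in transformers; the repo id comes from the card, while the prompt and generation settings are purely illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Vellimani/codegen-2B-mono-finetuned-python-18k-alpaca-full-dataset"

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,  # matches the card's float16 weights (~5.6 GB VRAM)
    device_map="auto",
)

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=128,  # stay well under the 2048-token context limit
    pad_token_id=tokenizer.eos_token_id,  # card lists <|endoftext|> as the pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```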
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
Archgen 2B V1 | 0K / 5.6 GB | 8 | 0 |
Salesforce Codegen 2B Multi Ov | 0K / 11.1 GB | 9 | 0 |
Nsql 2B | 0K / 11.3 GB | 87 | 9 |
Instruct Codegen 2B Multi | 0K / 11.3 GB | 13 | 1 |
Diff Codegen 2B V2 | 0K / 5.7 GB | 44 | 6 |
...en 2B Mono Instruct Py Revised | 0K / 11.3 GB | 2 | 1 |
Codegen 2B Mono Xlcost | 0K / 5.7 GB | 13 | 1 |
Codegen 2B Multi Xlcost | 0K / 5.7 GB | 108 | 1 |
Fine Tuned Codegen 2B Verilog | 0K / 11.3 GB | 257 | 7 |
Codegen 2B Multi | 0K / 5.7 GB | 2771 | 36 |