Ct2fast Codegen2 16B by michaelfeil


  Arxiv:2305.02309   Ctranslate2   Endpoints compatible   Float16   Int8   Region:us


Ct2fast Codegen2 16B Parameters and Internals

Model Type: program synthesis, autoregressive language model

Use Cases
Areas: program synthesis
Applications: feature extraction, code generation, completion of partially generated code
Primary Use Cases: generating executable code given English prompts
Limitations: the model is best at program synthesis; its performance on other tasks is unknown

Additional Notes: this version is quantized for faster inference with CTranslate2, reducing memory requirements by 2x-4x via int8 inference (a brief loading sketch follows the details table below)

Supported Languages: C (high), C++ (high), C# (high), Dart (high), Go (high), Java (high), JavaScript (high), Kotlin (high), Lua (high), PHP (high), Python (high), Ruby (high), Rust (high), Scala (high), Shell (high), SQL (high), Swift (high), TypeScript (high), Vue (high)

Training Details
Data Sources: the deduplicated version of the Stack dataset (v1.1)
Methodology: cross-entropy loss for causal language modeling combined with file-level span corruption (see the illustrative sketch below)

Input / Output
Input Format: text
Accepted Modalities: text
Output Format: text or code
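
The span-corruption part of the objective masks spans inside a source file and trains the model to reconstruct them, alongside ordinary left-to-right prediction. The snippet below is only an illustrative sketch of how such (input, target) pairs can be built: the sentinel token names (<mask_1>, <sep>, <eom>) are assumed from the CodeGen2 paper's scheme, and the helper function is hypothetical, not the actual training code.

```python
# Illustrative sketch of file-level span corruption (hypothetical helper, not the
# actual CodeGen2 training pipeline). Sentinel names <mask_N>, <sep>, <eom> are
# assumed from the CodeGen2 paper.
import random

def corrupt_file(tokens, num_spans=1, max_span_len=8, seed=0):
    """Mask random spans in a token list, returning the corrupted input and the
    target the model must reconstruct under cross-entropy loss."""
    rng = random.Random(seed)
    tokens = list(tokens)
    corrupted, target = [], []
    cursor = 0
    for i in range(1, num_spans + 1):
        # choose a span that starts after everything already emitted
        start = rng.randrange(cursor, len(tokens) - max_span_len)
        length = rng.randint(1, max_span_len)
        corrupted += tokens[cursor:start] + [f"<mask_{i}>"]
        target += [f"<mask_{i}>"] + tokens[start:start + length] + ["<eom>"]
        cursor = start + length
    corrupted += tokens[cursor:]
    # the training sequence is: corrupted file, <sep>, then the masked spans
    return corrupted + ["<sep>"], target

src = "def add ( a , b ) : return a + b".split()
inp, tgt = corrupt_file(src)
print("input :", " ".join(inp))
print("target:", " ".join(tgt))
```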
LLM Name: Ct2fast Codegen2 16B
Repository: https://huggingface.co/michaelfeil/ct2fast-codegen2-16B
Model Size: 16B
Required VRAM: 32.1 GB
Updated: 2025-02-22
Maintainer: michaelfeil
Model Files: 32.1 GB
Model Architecture: AutoModel
License: apache-2.0
Model Max Length: 1024
Tokenizer Class: GPT2Tokenizer
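
Because the checkpoint is a CTranslate2 conversion, it is loaded through CTranslate2 rather than plain transformers. The sketch below assumes the hf_hub_ctranslate2 helper package used for the maintainer's other ct2fast conversions; treat the exact class name and generation arguments as assumptions and check the repository README for the authoritative snippet.

```python
# Minimal loading sketch (assumes the hf_hub_ctranslate2 helper package;
# pip install hf-hub-ctranslate2 ctranslate2 transformers).
from hf_hub_ctranslate2 import GeneratorCT2fromHfHub

model = GeneratorCT2fromHfHub(
    model_name_or_path="michaelfeil/ct2fast-codegen2-16B",
    device="cuda",               # "cpu" also works; int8 keeps memory 2x-4x lower
    compute_type="int8_float16",
)

# Left-to-right completion of a partially written function.
outputs = model.generate(
    text=["def fibonacci(n):"],
    max_length=64,
)
print(outputs[0])
```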

Best Alternatives to Ct2fast Codegen2 16B

Best Alternatives | Context / RAM | Downloads / Likes
Ct2fast Codegen 16B Mono | 0K / 32.1 GB | 62
Llama 3 16B Instruct V0.1 GGUF | 0K / 6.4 GB | 6908
Tinyllama Gguf 16B | 0K / 2.2 GB | 130
Nanbeige 16B Chat 32K GGUF | 0K / 6.6 GB | 1626
Nanbeige 16B Chat GGUF | 0K / 6.6 GB | 1031
Nanbeige 16B Base 32K GGUF | 0K / 6.6 GB | 643
Nanbeige 16B Base GGUF | 0K / 6.6 GB | 681

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227