| Attribute | Details |
| --- | --- |
| Model Type | Mixture-of-Experts (MoE) code language model |
| Use Cases: Areas | code intelligence, mathematical reasoning |
| Use Cases: Primary Use Cases | coding tasks, general language tasks |
| Additional Notes | An AWQ-quantized version of DeepSeek-Coder-V2-Lite-Instruct is available (see the loading sketch below this table). |
| Supported Languages | Expands programming language support from 86 to 338 languages. |

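The Additional Notes row mentions an AWQ-quantized release. The following is a minimal loading sketch using the Hugging Face `transformers` API, assuming a published AWQ checkpoint; the repository id is a placeholder, and the `autoawq` and `accelerate` packages are assumed to be installed.

```python
# Minimal sketch: loading an AWQ-quantized DeepSeek-Coder-V2-Lite-Instruct checkpoint.
# The repository id below is a placeholder, not an official release name.
from transformers import AutoModelForCausalLM, AutoTokenizer

awq_repo = "your-org/DeepSeek-Coder-V2-Lite-Instruct-AWQ"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(awq_repo, trust_remote_code=True)

# transformers reads the quantization_config stored in the checkpoint and,
# with the autoawq backend installed, loads the quantized weights directly.
model = AutoModelForCausalLM.from_pretrained(
    awq_repo,
    device_map="auto",
    trust_remote_code=True,
)
```

AWQ is a weight-only quantization scheme (typically 4-bit), so once loaded the model is called exactly like the full-precision checkpoint.
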
| Training Details | |
| --- | --- |
| Data Sources | high-quality, multi-source corpus |
| Data Volume | |
| Methodology | Mixture-of-Experts (MoE) approach (see the routing sketch below this table) |
| Context Length | |
| Model Architecture | |

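The Methodology row names a Mixture-of-Experts approach. The snippet below is a generic, minimal sketch of the top-k routing idea behind an MoE layer: a small gating network scores the experts for each token and only the highest-scoring experts are evaluated. All sizes, the expert count, and `top_k` are invented for illustration and do not describe DeepSeek-Coder-V2's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyMoELayer(nn.Module):
    """Toy MoE feed-forward block with top-k token-to-expert routing."""

    def __init__(self, d_model=64, d_ff=128, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(d_model, num_experts, bias=False)  # router
        self.experts = nn.ModuleList(
            [
                nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
                for _ in range(num_experts)
            ]
        )

    def forward(self, x):
        # x: (num_tokens, d_model)
        scores = F.softmax(self.gate(x), dim=-1)          # routing probabilities
        weights, expert_idx = scores.topk(self.top_k, dim=-1)
        weights = weights / weights.sum(dim=-1, keepdim=True)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = expert_idx[:, k] == e              # tokens routed to expert e at slot k
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out


moe = TinyMoELayer()
tokens = torch.randn(10, 64)
print(moe(tokens).shape)  # torch.Size([10, 64])
```

Because only `top_k` experts run per token, per-token compute stays close to that of a single expert while the total parameter count scales with the number of experts.
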
| Input / Output | |
| --- | --- |
| Input Format | Chat completion and code completion (see the usage sketch below this table) |
| Accepted Modalities | |
| Output Format | |

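The Input Format row lists chat completion and code completion. Below is a minimal chat-completion sketch with the `transformers` API against the publicly released instruct checkpoint; the prompt and generation settings are illustrative only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct"

tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

# Chat completion: the tokenizer's chat template wraps the conversation in the
# format the instruct model expects.
messages = [{"role": "user", "content": "Write a quicksort function in Python."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))
```

For plain code completion, a code prefix can be tokenized and passed to `generate` directly, without the chat template.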