**Model Type:** Transformer; text generation, NLP, code

**Use Cases**

- **Areas:** not specified
- **Applications:** text generation, code generation
- **Primary Use Cases:** poem writing, email drafting, story creation, text summarization, Python code writing
- **Limitations:** potential to generate harmful content; may produce inaccurate code and facts; unreliable responses to instructions; limited scope for code
- **Considerations:** users should be cautious and critically evaluate outputs.

**Additional Notes:** Phi-1.5-generated text and code should be treated as a starting point; users should manually verify API usage whenever uncommon packages are involved.

**Supported Languages:** not specified

**Training Details**

- **Data Sources:** the same data sources as phi-1, plus various NLP synthetic texts
- **Data Volume:** not specified
- **Training Time:** not specified
- **Hardware Used:** not specified
- **Model Architecture:** Transformer-based, trained with a next-word prediction objective (a minimal sketch follows this list)
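
The architecture entry above only names the training objective. As a minimal, hypothetical sketch (not the authors' training code), next-word prediction can be illustrated with the Hugging Face causal-LM API: passing the input ids as `labels` makes the library shift them internally and compute cross-entropy against next-token targets. The `microsoft/phi-1_5` checkpoint id is assumed here.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical illustration of the next-word prediction objective,
# assuming the published "microsoft/phi-1_5" checkpoint.
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5")
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5")

text = "def add(a, b):\n    return a + b"
inputs = tokenizer(text, return_tensors="pt")

# For causal LMs, passing input_ids as labels computes the standard
# shifted cross-entropy loss over next-token targets.
with torch.no_grad():
    outputs = model(**inputs, labels=inputs["input_ids"])
print(f"next-word prediction loss: {outputs.loss.item():.4f}")
```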

**Responsible AI Considerations**

- **Transparency:** the model has not undergone instruction fine-tuning.
- **Mitigation Strategies:** the model is intended for research, to help develop methods that reduce toxicity directly after pretraining.

**Input / Output**

- **Input Format:** QA format, chat format, and code format (prompt sketches follow this list)
- **Accepted Modalities:** not specified
- **Output Format:** not specified
- **Performance Tips:** update to `transformers` version 4.37.0 or higher (see the version check after the prompt sketches).
|
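The input formats listed above come without examples. Below is a minimal sketch of all three prompt styles, assuming the `microsoft/phi-1_5` checkpoint; the exact template strings (speaker names, the `Answer:` cue) are illustrative conventions, not a fixed API.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5")
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5")

# QA format: a question followed by an explicit answer cue.
qa_prompt = "What is the capital of France?\n\nAnswer:"

# Chat format: alternating speaker tags, ending on the turn to complete.
chat_prompt = "Alice: Can you suggest a title for a poem about the sea?\nBob:"

# Code format: a signature plus docstring left for the model to complete.
code_prompt = 'def print_primes(n):\n    """Print all primes between 1 and n."""\n'

for prompt in (qa_prompt, chat_prompt, code_prompt):
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
    print("-" * 40)
```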
|
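Since the performance tip pins a minimum `transformers` version, a small runtime guard can catch stale environments; this is a generic sketch, not part of the original card.

```python
from packaging import version  # packaging ships as a transformers dependency

import transformers

# Guard against environments older than the 4.37.0 floor named in the
# performance tips above.
if version.parse(transformers.__version__) < version.parse("4.37.0"):
    raise RuntimeError(
        f"Found transformers {transformers.__version__}; "
        "upgrade with: pip install -U 'transformers>=4.37.0'"
    )
print(f"transformers {transformers.__version__} satisfies the requirement.")
```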