Model Type:

Use Cases
Areas:
Primary Use Cases: Researching language models for low-resource languages
Limitations: Not suitable for translation or for generating text in other languages; not fine-tuned for downstream contexts.
Considerations: Perform a risk and bias assessment before use in real-world applications.
Supported Languages:

Training Details
Data Sources: Pt-Corpus Instruct (6.2B tokens)
Data Volume:
Methodology: Transformer-based model pre-trained via causal language modeling
Context Length:
Training Time:
Hardware Used:
Model Architecture:
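The causal-language-modeling objective listed under Methodology means the model is trained to predict each token from the tokens before it. A minimal sketch of the shifted-label cross-entropy loss (the vocabulary, token IDs, and logits below are illustrative toy values, not this model's real configuration):

```python
import math

def causal_lm_loss(logits, token_ids):
    """Average next-token cross-entropy for a causal language model.

    logits[t] scores the token at position t+1, so the labels are the
    input sequence shifted left by one (standard causal-LM setup).
    """
    losses = []
    for t in range(len(token_ids) - 1):
        scores = logits[t]
        target = token_ids[t + 1]
        # Numerically stable log-sum-exp for the softmax normalizer.
        m = max(scores)
        log_z = m + math.log(sum(math.exp(s - m) for s in scores))
        losses.append(log_z - scores[target])  # -log p(target | prefix)
    return sum(losses) / len(losses)

# Toy vocabulary of 3 tokens; input sequence [0, 2, 1].
loss = causal_lm_loss(
    logits=[[2.0, 0.1, 0.3], [0.2, 1.5, 0.7]],
    token_ids=[0, 2, 1],
)
```

During pre-training this loss is minimized over the corpus; no task-specific labels are needed, which is why the card notes the model is not fine-tuned for downstream contexts.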
Responsible AI Considerations
Fairness: The model may produce biased or toxic content due to stereotypes inherited from the training data.
Input/Output
Accepted Modalities:
Performance Tips: Tune the repetition penalty and sampling parameters (temperature, top-k, top-p) to optimize outputs.
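To make the performance tips concrete, here is a minimal pure-Python sketch of how those controls reshape the next-token distribution. All numeric values (the example logits and the parameter settings) are illustrative assumptions, not the model's recommended defaults:

```python
import math

def filter_logits(logits, generated_ids, temperature=0.7,
                  repetition_penalty=1.3, top_k=50, top_p=0.9):
    """Apply common generation controls and return the filtered,
    renormalized next-token distribution as {token_id: probability}."""
    logits = list(logits)
    # Repetition penalty: dampen tokens that were already generated.
    for tok in set(generated_ids):
        if logits[tok] > 0:
            logits[tok] /= repetition_penalty
        else:
            logits[tok] *= repetition_penalty
    # Temperature scaling: <1 sharpens, >1 flattens the distribution.
    logits = [l / temperature for l in logits]
    # Numerically stable softmax.
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Top-k: keep only the k most probable tokens.
    ranked = sorted(range(len(probs)), key=lambda i: probs[i],
                    reverse=True)[:top_k]
    # Top-p (nucleus): keep the smallest prefix whose mass reaches top_p.
    kept, mass = [], 0.0
    for i in ranked:
        kept.append(i)
        mass += probs[i]
        if mass >= top_p:
            break
    total_kept = sum(probs[i] for i in kept)
    return {i: probs[i] / total_kept for i in kept}

# Toy 4-token vocabulary; token 0 was already generated.
dist = filter_logits([2.0, 1.0, 0.5, -1.0], generated_ids=[0],
                     temperature=0.7, repetition_penalty=1.3,
                     top_k=3, top_p=0.95)
```

In practice these same knobs are exposed by most inference libraries; the point of the sketch is that top-k and top-p prune the unlikely tail, while temperature and the repetition penalty reweight what remains, which is why mis-set values lead to either repetitive or incoherent output.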