Model Type: pruned multilingual causal language model derived from Falcon-11B

Use Cases
Areas: research, specialization, fine-tuning
Applications: summarization, text generation, chatbot
Primary Use Cases: text generation across the supported languages
Limitations: Limited generalization to languages outside the trained set.
Considerations: Take appropriate precautions for any production use.

Additional Notes: Evaluate harms and biases before any production deployment.

Supported Languages: es (fluent), fr (fluent), de (fluent), no (fluent), sv (fluent), da (fluent), nl (fluent), pt (fluent), pl (fluent), ro (fluent), it (fluent), cs (fluent)

Training Details
Data Sources: wikimedia/wikipedia subsets of 11 languages
Data Volume:
Methodology: Layer pruning using PruneMe, guided by similarity analysis across multiple languages (see the sketch below)
Model Architecture: Derived from Falcon-11B using the passthrough merge method

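The pruning decision can be illustrated with a PruneMe-style layer-similarity probe: measure how little a contiguous block of transformer layers changes the hidden states, and treat the most similar block as the best candidate for removal. This is a minimal sketch, not the exact PruneMe implementation; the block size and probe sentences are illustrative assumptions, and `tiiuae/falcon-11B` is used only because the card names Falcon-11B as the base.

```python
# Illustrative layer-similarity probe: find the contiguous block of layers whose
# removal would change the hidden states the least (a PruneMe-style criterion).
# Block size and probe texts are assumptions for this sketch.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "tiiuae/falcon-11B"   # base checkpoint named in this card
block_size = 4                     # assumed number of consecutive layers considered for removal

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    output_hidden_states=True,
)
model.eval()

probe_texts = [
    "El zorro marrón salta sobre el perro perezoso.",       # es
    "Le renard brun saute par-dessus le chien paresseux.",  # fr
    "Der braune Fuchs springt über den faulen Hund.",       # de
]

scores = None
with torch.no_grad():
    for text in probe_texts:
        inputs = tokenizer(text, return_tensors="pt").to(model.device)
        hidden = model(**inputs).hidden_states  # embeddings + one tensor per layer
        block_scores = []
        for start in range(len(hidden) - block_size):
            # Cosine similarity between the block's input and output hidden states.
            h_in, h_out = hidden[start], hidden[start + block_size]
            sim = torch.nn.functional.cosine_similarity(h_in, h_out, dim=-1).mean()
            block_scores.append(sim.float().item())
        block_scores = torch.tensor(block_scores)
        scores = block_scores if scores is None else scores + block_scores

best = int(torch.argmax(scores))
print(f"Most redundant block: layers {best}..{best + block_size - 1}")
```

The passthrough merge step then reassembles the remaining layers into the smaller model; in practice this is typically expressed as a merge configuration that lists the kept layer ranges with `merge_method: passthrough`.
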
Safety Evaluation
Methodologies: layer-similarity analysis
Findings: The model carries stereotypes and biases typical of online text.
Risk Categories:
Ethical Considerations: The model was trained on large-scale, web-representative corpora, so biases may be present in its outputs.

Responsible AI Considerations
Fairness: Evaluate fairness and bias before deploying the model.
Transparency: The pruning methodology is documented, but the process is not easily reversible.
Accountability: The deploying organization is accountable for harm caused by model outputs.
Mitigation Strategies: Fine-tuning and guardrails are recommended (see the fine-tuning sketch below).

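One possible mitigation and adaptation path is a parameter-efficient fine-tuning run on in-domain or curated data. The sketch below assumes a LoRA setup with the `peft` library; the model id, dataset slice, and hyperparameters are placeholders, not a prescribed recipe.

```python
# Sketch: LoRA fine-tuning for domain adaptation / mitigation.
# Model id, dataset slice, and hyperparameters are illustrative placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "your-org/pruned-falcon-multilingual"  # placeholder for the pruned checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
model = get_peft_model(
    model,
    LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM"),
)

# Small in-domain corpus; here a Wikipedia subset stands in for curated domain data.
data = load_dataset("wikimedia/wikipedia", "20231101.es", split="train[:1000]")
data = data.map(lambda x: tokenizer(x["text"], truncation=True, max_length=512),
                remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="pruned-falcon-domain-lora",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        logging_steps=10,
    ),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Guardrails (input/output filtering, moderation layers) remain the deployer's responsibility and are not shown here.
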
Input Output
Input Format: Text in any of the supported languages.
Accepted Modalities: text
Output Format: Generated text conditioned on the input prompt.
Performance Tips: Fine-tuning is recommended for domain-specific applications (see the example below).
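
A minimal text-generation example matching the input/output description above; the model id is a placeholder for the pruned checkpoint, and the sampling settings are illustrative, not recommendations from this card.

```python
# Minimal generation sketch: a text prompt in a supported language in, generated text out.
# The model id and generation settings are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/pruned-falcon-multilingual"  # placeholder for the pruned checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Escribe un breve resumen sobre la historia de la imprenta:"  # Spanish prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output_ids = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```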
|
|