| Attribute | Description |
|---|---|
| Model Type | Transformer-based, efficient language model |

Use Cases

| Attribute | Description |
|---|---|
| Limitations | Models may produce output that is inaccurate, biased, or objectionable. |
| Considerations | Users must undertake thorough safety testing and implement filtering mechanisms. |

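The filtering mechanisms mentioned above could take many forms. A minimal sketch of one post-generation filter is below; the blocklist contents and the `filter_output` helper are illustrative placeholders (not from this model card), and a real deployment would rely on trained safety classifiers rather than keyword matching alone.

```python
# Minimal sketch of a post-generation filtering mechanism. The blocklist
# terms and API are hypothetical; production systems should combine this
# kind of check with trained safety classifiers.
from dataclasses import dataclass


@dataclass
class FilterResult:
    allowed: bool
    reason: str


# Placeholder terms for illustration only.
BLOCKLIST = {"example-banned-term"}


def filter_output(text: str) -> FilterResult:
    """Return whether generated text passes a simple blocklist check."""
    lowered = text.lower()
    for term in BLOCKLIST:
        if term in lowered:
            return FilterResult(False, f"blocked term: {term}")
    return FilterResult(True, "ok")
```

In practice this check would run on every model response before it reaches the user, alongside the upstream safety testing the card calls for.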
Training Details

| Attribute | Description |
|---|---|
| Data Sources | RefinedWeb, deduplicated PILE, subset of RedPajama, subset of Dolma v1.6 |
| Data Volume | |
| Methodology | Layer-wise scaling strategy |
| Model Architecture | |

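To illustrate the layer-wise scaling methodology: rather than giving every transformer layer the same width, one common form of the idea interpolates per-layer attention-head counts and FFN multipliers across depth. The schedule below is a sketch of that general pattern; the ranges and the `layerwise_schedule` function are made-up examples, not this model's actual configuration.

```python
# Illustrative sketch of a layer-wise scaling schedule: per-layer attention
# heads and FFN width multipliers are linearly interpolated across depth.
# All numeric ranges here are hypothetical examples.

def layerwise_schedule(num_layers, heads_range=(8, 16), ffn_mult_range=(2.0, 4.0)):
    """Return a per-layer config interpolated from the first to the last layer."""
    schedule = []
    for i in range(num_layers):
        t = i / max(num_layers - 1, 1)  # 0.0 at the first layer, 1.0 at the last
        heads = round(heads_range[0] + t * (heads_range[1] - heads_range[0]))
        ffn_mult = ffn_mult_range[0] + t * (ffn_mult_range[1] - ffn_mult_range[0])
        schedule.append({"layer": i, "heads": heads, "ffn_mult": round(ffn_mult, 2)})
    return schedule
```

The effect is that early layers stay narrow and cheap while later layers get more capacity, spending the parameter budget unevenly across depth.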
Responsible AI Considerations

| Attribute | Description |
|---|---|
| Mitigation Strategies | Electronic safeguards and user attributions are necessary. |

Input/Output

| Attribute | Description |
|---|---|
| Input Format | Tokenized text inputs as prompts. |
| Accepted Modalities | |
| Output Format | |
| Performance Tips | Use speculative generation techniques for faster inference. |
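The speculative generation tip can be sketched as follows: a cheap draft model proposes several tokens ahead, and the target model verifies them, keeping the longest agreeing prefix so multiple tokens can be accepted per target step. Both "models" below are toy stand-in functions invented for illustration; a real setup pairs a small draft LM with the full model and batches verification into one forward pass.

```python
# Toy sketch of speculative generation over integer "tokens". The draft and
# target models are hypothetical stand-ins; verification is shown token by
# token here, but a real implementation checks all proposals in one batched
# target forward pass.

def draft_propose(prefix, k):
    # Hypothetical cheap draft model: predicts each next token as last + 1.
    out, last = [], prefix[-1]
    for _ in range(k):
        last += 1
        out.append(last)
    return out


def target_next_token(prefix):
    # Hypothetical target model: agrees with the draft except on multiples of 4.
    nxt = prefix[-1] + 1
    return nxt if nxt % 4 != 0 else 0


def speculative_step(prefix, k=4):
    """Accept the draft's tokens until the target disagrees, then take the
    target's token and stop."""
    proposed = draft_propose(prefix, k)
    accepted, ctx = [], list(prefix)
    for tok in proposed:
        verified = target_next_token(ctx)
        if verified != tok:
            accepted.append(verified)  # correct the mismatch and stop
            break
        accepted.append(tok)
        ctx.append(tok)
    return prefix + accepted
```

The speedup comes from accepting several draft tokens per target evaluation whenever the two models agree, which is common on easy continuations.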
|
|