Model Type |

Use Cases |
Areas: | Research, protein sequence analysis
Applications: | Protein embedding, protein structure prediction
Primary Use Cases: | Generating meaningful protein embeddings from sequences (see the embedding sketch below)
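
As a rough sketch of the embedding use case, the snippet below loads the model through the Hugging Face Transformers API and mean-pools the last hidden state into one vector per sequence. The checkpoint id is a placeholder (the card does not name one), and mean pooling is one common choice, not necessarily the intended method.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Placeholder checkpoint id; substitute the actual model repository.
CHECKPOINT = "org/protein-model"  # hypothetical name, not from the card

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModel.from_pretrained(CHECKPOINT)
model.eval()

# A protein sequence is passed as a plain string of amino-acid letters.
sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"

inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool token embeddings into one fixed-size sequence embedding;
# other pooling strategies (first token, max pooling) are equally valid.
embedding = outputs.last_hidden_state.mean(dim=1)
print(embedding.shape)  # (1, hidden_size)
```
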
Additional Notes | Derived by reducing the number of layers and the hidden size specifically for protein data.

Supported Languages | Protein sequence (proficiency: N/A)

Training Details |
Data Sources: |
Data Volume: |
Methodology: | Pretrained on protein data

Model Architecture: | Transformer with Grouped-Query Attention, Sliding-Window Attention, a byte-fallback BPE tokenizer, and Mixture of Experts
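
This combination of features (Grouped-Query Attention, Sliding-Window Attention, byte-fallback BPE, Mixture of Experts) matches the Mixtral architecture family in Transformers, so a Mixtral-style configuration is one way to picture how the pieces relate. Treating the model as Mixtral-compatible is an assumption, and every numeric value below is illustrative rather than taken from the card.

```python
from transformers import MixtralConfig

# Illustrative values only; the card does not publish the real
# hyperparameters. "Reduced layers and hidden size" is reflected
# here as a small layer count and hidden dimension.
config = MixtralConfig(
    vocab_size=1024,         # byte-fallback BPE keeps the vocabulary compact
    hidden_size=512,         # reduced hidden size
    num_hidden_layers=8,     # reduced layer count
    num_attention_heads=8,
    num_key_value_heads=2,   # < num_attention_heads -> Grouped-Query Attention
    sliding_window=256,      # Sliding-Window Attention span
    num_local_experts=8,     # Mixture-of-Experts feed-forward blocks
    num_experts_per_tok=2,   # experts routed per token
)
print(config)
```
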
Input/Output |
Input Format: |
Accepted Modalities: |
Output Format: |
Performance Tips: | Use a stable release of the Hugging Face Transformers library (a version-check sketch follows below)
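
One way to act on this tip is to verify the installed Transformers version before loading the model. The 4.36.0 floor below is an assumption (the release that introduced Mixtral support), not a requirement stated in the card.

```python
import transformers
from packaging import version  # packaging ships as a transformers dependency

# 4.36.0 is an assumed floor (first release with Mixtral support);
# adjust to whatever the model repository actually requires.
MINIMUM = version.parse("4.36.0")
installed = version.parse(transformers.__version__)
assert installed >= MINIMUM, (
    f"transformers {transformers.__version__} installed; "
    f"need >= {MINIMUM} for this model"
)
```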