| Attribute | Details |
|-----------|---------|
| Model Type | text-generation, text-summarization |
| **Use Cases** | |
| Areas | Research, Companionship, Long text summarization |
| Applications | Book summarization, Comprehensive bulleted notes |
| Primary Use Cases | Psychology text summarization |
| Limitations | Does not engage in roleplay or romance |
| **Additional Notes** | The developer notes that the dataset contains some improperly escaped characters (a quick validation check is sketched below). |
| **Supported Languages** | |

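Since the note above flags escaping problems in the dataset, one quick way to surface them is to try parsing each record and report any failures. This is a minimal sketch, assuming the data is stored as JSONL in a file named `pairs.jsonl` (a placeholder name):

```python
# Report records whose JSON fails to parse, e.g. due to improper escapes.
import json

with open("pairs.jsonl", encoding="utf-8") as f:  # file name is a placeholder
    for lineno, line in enumerate(f, 1):
        try:
            json.loads(line)
        except json.JSONDecodeError as err:
            print(f"line {lineno}: {err}")
```
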
| Attribute | Details |
|-----------|---------|
| **Training Details** | |
| Data Sources | Samantha-1.1 dataset (5,000 document-output example pairs) |
| Data Volume | |
| Methodology | Fine-tuned on the Samantha-1.1 dataset (see the sketch below) |
| Training Time | |
| Hardware Used | |
| Model Architecture | Based on mistral-7b-instruct |

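For orientation, here is a minimal sketch of how a fine-tune of this shape could be reproduced with LoRA over the document-output pairs. It is not the released training script: the base checkpoint id, data file, field names, and hyperparameters are all assumptions.

```python
# Minimal LoRA fine-tuning sketch; NOT the released training script.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE = "mistralai/Mistral-7B-Instruct-v0.1"  # assumed base checkpoint
tok = AutoTokenizer.from_pretrained(BASE)
tok.pad_token = tok.eos_token  # Mistral tokenizers ship without a pad token

model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16)
model = get_peft_model(
    model,
    LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
               task_type="CAUSAL_LM"),
)

def tokenize(example):
    # One training example per document-output pair, in Mistral-Instruct format.
    text = f"[INST] {example['document']} [/INST] {example['output']}"
    return tok(text, truncation=True, max_length=4096)

ds = load_dataset("json", data_files="pairs.jsonl", split="train")
ds = ds.map(tokenize, remove_columns=ds.column_names)

Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="bulleted-notes-ft",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-4,
        bf16=True,
        logging_steps=10,
    ),
    train_dataset=ds,
    # mlm=False gives the causal-LM objective: labels are the input ids.
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()
```

LoRA is used here only to keep the sketch runnable on a single GPU; the actual fine-tune may have been full-parameter.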
|
|
| Attribute | Details |
|-----------|---------|
| **Safety Evaluation** | |
| Methodologies | Conversational restrictions in place |
| Risk Categories | |
| Ethical Considerations | Avoids topics of romance, roleplay, and illegal activities |
|
|
| Attribute | Details |
|-----------|---------|
| **Responsible AI Considerations** | |
| Fairness | |
| Transparency | Open source, with extensive documentation and script access |
| Accountability | Cognitive Computations is accountable for the model |
| Mitigation Strategies | Does not engage in romance, roleplay, or illegal activity; enforced through clearly expressed system prompts (illustrated below) |

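To illustrate the system-prompt mitigation, the snippet below shows the general pattern of folding a guardrail prompt into every request. The wording is hypothetical, not the model's shipped system prompt.

```python
# System-prompt guardrail pattern; the wording here is hypothetical.
SYSTEM_PROMPT = (
    "You are a summarization assistant that writes comprehensive bulleted "
    "notes. You do not engage in romance, roleplay, or illegal activities."
)

def build_prompt(user_text: str) -> str:
    # Mistral-Instruct has no separate system role, so the system text is
    # folded into the [INST] block ahead of the user's request.
    return f"[INST] {SYSTEM_PROMPT}\n\n{user_text} [/INST]"

print(build_prompt("Summarize this chapter: ..."))
```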
|
|
| Attribute | Details |
|-----------|---------|
| **Input/Output** | |
| Input Format | |
| Accepted Modalities | |
| Output Format | |
| Performance Tips | Ensure the input text is clearly structured for the best summaries (a usage sketch follows below) |

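The usage sketch below shows one way to request bulleted notes; the hub id is a placeholder, and the prompt format is assumed to follow the base mistral-7b-instruct convention.

```python
# Minimal inference sketch for bulleted-note summarization.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "your-org/samantha-bulleted-notes-7b"  # placeholder hub id
tok = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

document = "..."  # the clearly structured text to summarize
prompt = (
    "[INST] Write comprehensive bulleted notes on the following text:\n\n"
    f"{document} [/INST]"
)

inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=512, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

Greedy decoding (`do_sample=False`) tends to keep the note structure stable; raise `max_new_tokens` for longer documents.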
|
|
| Attribute | Details |
|-----------|---------|
| **Release Notes** | |
| Version | |
| Date | |
| Notes | First successful fine-tune for comprehensive bulleted notes |