**Model Type:**
**Use Cases**

- Areas: open-source community, chat-like applications
- Limitations: The model may generate biased or toxic text despite safety-focused fine-tuning; it is not intended as a replacement for human judgment.
- Considerations: Be mindful of potentially biased or toxic outputs.
**Additional Notes:** The models were developed with help from Dakota Mahan ([@dmayhem93](https://huggingface.co/dmayhem93)).
**Supported Languages:**
**Training Details**

- Data Sources: tatsu-lab/alpaca, nomic-ai/gpt4all_prompt_generations, Dahoas/full-hh-rlhf, jeffwan/sharegpt_vicuna, HuggingFaceH4/databricks_dolly_15k
- Methodology: Supervised fine-tuning on natural-language datasets focused on chat and instruction-following tasks.
- Context Length:
- Model Architecture: NeoX transformer architecture
**Responsible AI Considerations**

- Fairness: The models are fine-tuned toward safer distributions of text, but this cannot mitigate all biases and toxicity.
- Transparency: The model should not be treated as a substitute for human judgment or considered a source of truth.
- Accountability: Users are responsible for the outputs they generate and should use the models responsibly.
- Mitigation Strategies: Fine-tuning on datasets aimed at improving safety, though this may not remove all biases or toxicity.
**Input / Output**

- Input Format: Prompts formatted as `<|SYSTEM|>...<|USER|>...<|ASSISTANT|>...`
- Accepted Modalities:
- Output Format:
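The `<|SYSTEM|>`/`<|USER|>`/`<|ASSISTANT|>` template above can be assembled as a plain string before tokenization. A minimal sketch; the helper name and the default system prompt text here are illustrative assumptions, not part of the model card:

```python
def build_prompt(user_message: str,
                 system_prompt: str = "You are a helpful assistant.") -> str:
    """Format a single-turn prompt using the <|SYSTEM|>/<|USER|>/<|ASSISTANT|> template.

    The string ends at <|ASSISTANT|> so the model generates the reply
    immediately after the final special token.
    """
    return f"<|SYSTEM|>{system_prompt}<|USER|>{user_message}<|ASSISTANT|>"

# Example: build a prompt for a single user turn.
prompt = build_prompt("What is the capital of France?")
print(prompt)
```

The resulting string would then be passed to the tokenizer as-is, so that the special tokens are encoded exactly as they appeared during fine-tuning.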