| Training Details | |
| --- | --- |
| Data Sources | Danielbrdz/Barcenas-Economia, HiTZ/casimedicos-exp, somosnlp/coser_resumenes, csebuetnlp/CrossSum, Iker/Document-Translation-en-es, somosnlp/es-inclusive-language-it, glaiveai/glaive-code-assistant-v3, glaiveai/glaive-function-calling-v2, Iker/InstructTranslation-EN-ES, somosnlp/lenguaje-claro-dataset, somosnlp/LingComp_QA, Iker/NoticIA, teknium/OpenHermes-2.5, Iker/OpenHermes-2.5-Spanish, Helsinki-NLP/opus-100, projecte-aina/RAG_Multilingual, HiTZ/This-is-not-a-dataset, wikipedia, Iker/Reddit-Post-Translation |
| Methodology | Llama-based fine-tuning on a diverse mixture of Spanish and English datasets to strengthen multilingual ability (see the sketch after this table). |
| Context Length | |
| Hardware Used | |
| Model Architecture | |
|
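To make the methodology row concrete, the sketch below shows what a supervised fine-tuning run over one of the listed sources could look like, assuming a Hugging Face TRL-style workflow. The base checkpoint (`meta-llama/Meta-Llama-3-8B-Instruct`), the choice of a single example dataset, the output directory, and all hyperparameters are illustrative assumptions, not the exact recipe used for this model.

```python
# Minimal sketch of the fine-tuning setup described above, assuming a
# Hugging Face TRL supervised fine-tuning (SFT) workflow. Base checkpoint,
# dataset choice, and hyperparameters are assumptions for illustration.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# One of the listed instruction sources; building the full mixture would
# mean converting every dataset to a shared chat/text format and
# concatenating them before training.
train_dataset = load_dataset("Iker/OpenHermes-2.5-Spanish", split="train")

trainer = SFTTrainer(
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # assumed Llama base model
    train_dataset=train_dataset,
    args=SFTConfig(
        output_dir="llama-es-en-sft",     # hypothetical output directory
        max_seq_length=4096,              # assumed context length
        num_train_epochs=1,               # illustrative hyperparameters
        per_device_train_batch_size=2,
    ),
)
trainer.train()
```

In practice, a mixture like the one listed above would also need per-source weighting and deduplication before the concatenated data is passed to the trainer.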