The Dark Side of the Language: Pre-trained Transformers in the DarkNet

Bibliographic Details
Published in: arXiv.org (Nov 17, 2023)
Main Author: Ranaldi, Leonardo
Other Authors: Nourbakhsh, Aria, Patrizi, Arianna, Ruzzetti, Elena Sofia, Onorati, Dario, Fallucchi, Francesca, Zanzotto, Fabio Massimo
Published: Cornell University Library, arXiv.org
Online Access: Citation/Abstract
Full text outside of ProQuest
Description
Abstract: Pre-trained Transformers are challenging human performance in many NLP tasks. The massive datasets used for pre-training seem to be the key to their success on existing tasks. In this paper, we explore how a range of pre-trained Natural Language Understanding models perform on genuinely unseen sentences drawn from classification tasks over a DarkNet corpus. Surprisingly, the results show that syntactic and lexical neural networks perform on par with pre-trained Transformers, even after fine-tuning. Only after what we call extreme domain adaptation, that is, retraining with the masked-language-model task on the entire novel corpus, do pre-trained Transformers reach their usual high results. This suggests that huge pre-training corpora may give Transformers unexpected help, since they have already been exposed to many of the possible sentences.
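
The "extreme domain adaptation" described in the abstract amounts to continuing masked-language-model pre-training on the new corpus before any task-specific fine-tuning. Below is a minimal sketch of such a step using the Hugging Face transformers and datasets libraries; the base model name, corpus path, and hyperparameters are illustrative assumptions, not the authors' actual configuration.

# Hypothetical sketch: continue MLM pre-training of an off-the-shelf
# Transformer on an in-domain corpus (one sentence per line).
from transformers import (
    AutoTokenizer,
    AutoModelForMaskedLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

model_name = "bert-base-uncased"  # assumed base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Load the raw in-domain corpus (hypothetical file path).
corpus = load_dataset("text", data_files={"train": "darknet_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = corpus.map(tokenize, batched=True, remove_columns=["text"])

# Standard MLM objective: randomly mask 15% of tokens.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="mlm-domain-adapted",
    per_device_train_batch_size=32,
    num_train_epochs=3,       # assumed; the paper may use a different budget
    learning_rate=5e-5,
)

Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    data_collator=collator,
).train()

# The adapted encoder can then be fine-tuned on the downstream
# classification task in the usual way.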
ISSN:2331-8422
DOI:10.26615/978-954-452-092-2_102
Source: Engineering Database