An ensemble deep learning model for author identification through multiple features

Bibliographic Details
Published in: Scientific Reports (Nature Publisher Group) vol. 15, no. 1 (2025), p. 26477-26491
Main author: Zhang, Yuan
Published: Nature Publishing Group
Online access: Citation/Abstract
Full Text
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3231999410
003 UK-CbPIL
022 |a 2045-2322 
024 7 |a 10.1038/s41598-025-11596-5  |2 doi 
035 |a 3231999410 
045 2 |b d20250101  |b d20251231 
084 |a 274855  |2 nlm 
100 1 |a Zhang, Yuan  |u School of Humanities and Social Science, Xi’an Jiaotong University, 710049, Xi’an, Shaanxi Province, China (ROR: https://ror.org/017zhmm22) (GRID: grid.43169.39) (ISNI: 0000 0001 0599 1243); School of Humanities and Foreign Languages, Xi’an University of Technology, 710049, Xi’an, Shaanxi Province, China (ROR: https://ror.org/038avdt50) (GRID: grid.440722.7) (ISNI: 0000 0000 9591 9677) 
245 1 |a An ensemble deep learning model for author identification through multiple features 
260 |b Nature Publishing Group  |c 2025 
513 |a Journal Article 
520 3 |a Authorship identification is one of the challenges in natural language processing. The proposed research improves the accuracy and stability of authorship identification through a new deep learning framework that combines features of various types in a self-attentive weighted ensemble. Our approach substantially enhances generalization by combining a wide range of writing-style representations, including statistical features, TF-IDF vectors, and Word2Vec embeddings. Each feature set is fed through a separate Convolutional Neural Network (CNN) so that its specific stylistic characteristics can be extracted. More importantly, a self-attention mechanism is introduced to combine the outputs of these specialized CNNs, allowing the model to dynamically learn the significance of each feature type. The fused representation is then passed to a weighted softmax classifier, optimizing performance by exploiting the strengths of the individual network branches. The proposed model was extensively tested on two datasets: Dataset A, containing four authors, and Dataset B, containing thirty authors. Our method outperformed baseline state-of-the-art methods by at least 3.09% and 4.45% on Dataset A and Dataset B, achieving accuracies of 80.29% and 78.44%, respectively. This self-attention-augmented multi-feature ensemble approach is highly effective, yielding significant gains in accuracy and robustness for author identification. 
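The abstract describes a pipeline in which each feature type passes through its own branch, a self-attention mechanism weights the branch outputs, and the fused vector feeds a softmax classifier. The following is a minimal NumPy sketch of that fusion idea, not the authors' implementation: the CNN branches are stood in for by simple linear+ReLU projections, the attention scoring rule and all dimensions are illustrative assumptions, and the weights are random.

```python
import numpy as np

rng = np.random.default_rng(0)

def branch(x, W, b):
    # Stand-in for a per-feature-type CNN branch: linear projection + ReLU.
    return np.maximum(0.0, x @ W + b)

def attention_fuse(H):
    # H: (n_branches, d). Score each branch representation against the mean,
    # softmax the scores, and return the attention-weighted sum -- the
    # "dynamically learned significance of each feature type" in spirit.
    scores = H @ H.mean(axis=0)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ H, weights

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

d = 16                          # shared branch output width (assumed)
stat  = rng.normal(size=8)      # hand-crafted statistical style features
tfidf = rng.normal(size=32)     # TF-IDF vector (toy dimensionality)
w2v   = rng.normal(size=50)     # e.g. averaged Word2Vec embedding

# One independent branch per feature type, projecting to a common width d.
H = np.stack([
    branch(stat,  rng.normal(size=(8, d)),  np.zeros(d)),
    branch(tfidf, rng.normal(size=(32, d)), np.zeros(d)),
    branch(w2v,   rng.normal(size=(50, d)), np.zeros(d)),
])

fused, weights = attention_fuse(H)
n_authors = 4                   # Dataset A in the abstract has four authors
logits = fused @ rng.normal(size=(d, n_authors))
probs = softmax(logits)
print("branch weights:", weights.round(3), "predicted author:", probs.argmax())
```

In a trained model the branch and classifier weights would of course be learned end-to-end; the sketch only shows how attention weights over per-feature branches produce a single fused representation for classification.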
653 |a Machine learning 
653 |a Accuracy 
653 |a Datasets 
653 |a Deep learning 
653 |a Nonparametric statistics 
653 |a Writing 
653 |a Regression analysis 
653 |a Hypothesis testing 
653 |a Data mining 
653 |a Identification 
653 |a Stylistics 
653 |a Linguistics 
653 |a Multilingualism 
653 |a Natural language processing 
653 |a Methods 
653 |a Algorithms 
653 |a Large language models 
653 |a Neural networks 
653 |a Information retrieval 
653 |a Case studies 
653 |a Economic 
773 0 |t Scientific Reports (Nature Publisher Group)  |g vol. 15, no. 1 (2025), p. 26477-26491 
786 0 |d ProQuest  |t Science Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3231999410/abstract/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch 
856 4 0 |3 Full Text  |u https://www.proquest.com/docview/3231999410/fulltext/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3231999410/fulltextPDF/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch