A Radical-Based Token Representation Method for Enhancing Chinese Pre-Trained Language Models

Bibliographic Details
Container/Database: Electronics, vol. 14, no. 5 (2025), p. 1031
Main Author: Qin, Honglun
Other Authors: Li, Meiwen; Wang, Lin; Ge, Youming; Zhu, Junlong; Zheng, Ruijuan
Published in: MDPI AG
Online Access: Citation/Abstract; Full Text + Graphics; Full Text - PDF
Description
Abstract: In natural language processing (NLP), Chinese tokenization remains a primary challenge because written Chinese lacks explicit word boundaries. Existing tokenization methods often treat each Chinese character as an indivisible unit, neglecting finer semantic features embedded in the characters, such as radicals. To tackle this issue, we propose a novel token representation method that integrates radical-based features into the tokenization process. The proposed method extends the vocabulary to include both radical tokens and the original character tokens, enabling a more granular understanding of Chinese text. We conduct experiments on seven datasets covering multiple Chinese NLP tasks. The results show that our method significantly improves model performance on downstream tasks. Specifically, the accuracy of BERT on the BQ Corpus dataset improved to 86.95%, a gain of 1.65% over the baseline, and BERT-wwm improved by 1.28%. These results suggest that incorporating fine-grained radical features offers a more effective solution for Chinese tokenization and paves the way for future research in Chinese text processing.
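The abstract describes extending a pre-trained model's vocabulary with radical tokens alongside the original character tokens. The sketch below shows one way such an extension could look using the Hugging Face transformers API; the CHAR_TO_RADICAL table, the "character followed by its radical" input scheme, and the choice of bert-base-chinese are illustrative assumptions, not the authors' actual implementation.

```python
# A minimal sketch, assuming the Hugging Face `transformers` library, of how a
# Chinese BERT vocabulary could be extended with radical tokens. The radical
# table and input scheme below are illustrative assumptions only.
from transformers import BertModel, BertTokenizer

# Tiny illustrative character-to-radical map; a real setup would use a
# complete radical dictionary covering the CJK character range.
CHAR_TO_RADICAL = {"河": "氵", "湖": "氵", "情": "忄", "想": "心"}

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertModel.from_pretrained("bert-base-chinese")

# 1. Register radical tokens so they receive their own embedding rows.
radical_tokens = sorted(set(CHAR_TO_RADICAL.values()))
tokenizer.add_tokens(radical_tokens)           # skips tokens already in the vocab
model.resize_token_embeddings(len(tokenizer))  # grow the embedding matrix

# 2. One possible input scheme: follow each character with its radical token.
def radical_augmented_tokens(text):
    tokens = []
    for ch in text:
        tokens.append(ch)
        radical = CHAR_TO_RADICAL.get(ch)
        if radical is not None:
            tokens.append(radical)
    return tokens

tokens = radical_augmented_tokens("河水很清")
input_ids = tokenizer.convert_tokens_to_ids(tokens)
print(tokens)
print(input_ids)
```

Resizing the embedding matrix after adding tokens gives the new radical tokens trainable embedding rows, which can then be fine-tuned on downstream tasks along with the rest of the model.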
ISSN: 2079-9292
DOI: 10.3390/electronics14051031
Source: Advanced Technologies & Aerospace Database