Feature Substitution Using Latent Dirichlet Allocation for Text Classification

Bibliographic details
Published in: International Journal of Advanced Computer Science and Applications, vol. 16, no. 1 (2025)
Publisher: Science and Information (SAI) Organization Limited
Additional bibliographic details
Abstract: Text classification plays a pivotal role in natural language processing, enabling applications such as product categorization, sentiment analysis, spam detection, and document organization. Traditional methods, including bag-of-words and TF-IDF, often lead to high-dimensional feature spaces, increasing computational complexity and susceptibility to overfitting. This study introduces a novel Feature Substitution technique using Latent Dirichlet Allocation (FS-LDA), which enhances text representation by replacing non-overlapping high-probability topic words. FS-LDA effectively reduces dimensionality while retaining essential semantic features, optimizing classification accuracy and efficiency. Experimental evaluations on five e-commerce datasets and an SMS spam dataset demonstrated that FS-LDA, combined with Hidden Markov Models (HMMs), achieved up to 95% classification accuracy in binary tasks and significant improvements in macro and weighted F1-scores for multiclass tasks. The innovative approach lies in FS-LDA's ability to seamlessly integrate dimensionality reduction with feature substitution, while its predictive advantage is demonstrated through consistent performance enhancement across diverse datasets. Future work will explore its application to other classification models and domains, such as social media analysis and medical document categorization, to further validate its scalability and robustness. (An illustrative code sketch of LDA-based feature substitution follows this record.)
ISSN: 2158-107X (print); 2156-5570 (online)
DOI: 10.14569/IJACSA.2025.01601105
Source: Advanced Technologies & Aerospace Database
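
The abstract describes FS-LDA only at a high level. As a rough illustration of the general idea, the Python sketch below fits an LDA model with scikit-learn, finds each document's dominant topic, and replaces tokens that fall outside that topic's top-N word list with the topic's single highest-probability word. The toy corpus, the TOP_N threshold, and the substitution rule itself are assumptions made for illustration; they are not the paper's exact FS-LDA procedure or its HMM classification stage.

```python
# Minimal sketch of LDA-based feature substitution (illustrative assumptions,
# not the paper's exact FS-LDA algorithm): fit LDA, find each document's
# dominant topic, and replace tokens outside that topic's top-N word list
# with the topic's single most probable word.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy corpus standing in for the e-commerce / SMS data used in the paper.
docs = [
    "cheap phone case with free shipping",
    "win a free prize claim your reward now",
    "new laptop charger with fast delivery",
]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)
vocab = np.array(vectorizer.get_feature_names_out())

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)
topic_word = lda.components_  # unnormalized topic-word weights, shape (topics, vocab)

TOP_N = 10  # assumed size of each topic's high-probability word list
top_words = [set(vocab[np.argsort(row)[::-1][:TOP_N]]) for row in topic_word]

def substitute(doc: str) -> str:
    """Replace tokens outside the dominant topic's top-N words with that
    topic's most probable word (an illustrative substitution rule)."""
    doc_topic = lda.transform(vectorizer.transform([doc]))[0]
    topic = int(np.argmax(doc_topic))
    anchor = str(vocab[int(np.argmax(topic_word[topic]))])
    return " ".join(
        tok if tok in top_words[topic] else anchor
        for tok in doc.lower().split()
    )

for d in docs:
    print(substitute(d))
```

In a full pipeline along the lines the abstract describes, the substituted documents would then be passed to the downstream classifier (HMMs in the paper), so that the effective feature space is restricted to high-probability topic vocabulary rather than the full bag-of-words vocabulary.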