How Large Language Models Enhance Topic Modeling on User-Generated Content

Bibliographic Details
Published in: Journal of Physics: Conference Series, vol. 3114, no. 1 (Sep 2025), p. 012011
Main Author: Bui, Minh Phuoc
Other Authors: Nguyen, Mien Thi Ngoc
Publisher: IOP Publishing
Description
Abstract: Understanding user-generated content (UGC) is crucial for obtaining actionable insights in domains such as e-commerce and hospitality. However, the noisy and redundant nature of such content presents challenges for topic modeling methods like Latent Semantic Analysis (LSA). In this paper, we investigate whether preprocessing user reviews with large language models (LLMs) can improve topic modeling performance. Specifically, we compare two input variants: (1) raw reviews and (2) ChatGPT-generated summaries produced via API as concise keyphrases. We apply LSA with varimax rotation to each variant and evaluate the resulting topic models using multiple criteria, including topic coherence (c_v), average pairwise Jaccard overlap, and cluster compactness via silhouette scores. Unlike prior work that employs LLMs primarily for post hoc topic labeling or interpretation, our method integrates an LLM directly into the preprocessing pipeline to reshape noisy input into structured, standardized summaries. While ChatGPT-based preprocessing yields lower c_v coherence scores, likely due to reduced lexical redundancy, it significantly improves topic separation, cluster quality, and topical specificity, leading to more interpretable and well-structured topic models overall.
ISSN: 1742-6588, 1742-6596
DOI:10.1088/1742-6596/3114/1/012011
Source: Advanced Technologies & Aerospace Database