Automatic Classification of Online Learner Reviews Via Fine-Tuned BERTs

Bibliographic Details
Published in: International Review of Research in Open and Distributed Learning vol. 26, no. 1 (Mar 2025), p. 57
Main Author: Chen, Xieling
Other Authors: Zou, Di; Xie, Haoran; Cheng, Gary; Li, Zongxi; Wang, Fu Lee
Description
Abstract: Massive open online courses (MOOCs) offer rich opportunities to understand learners' experiences by examining their self-generated course evaluation content. This study investigated the effectiveness of fine-tuned BERT models for the automated classification of topics in online course reviews and explored how these topics vary across disciplines and course rating groups. Based on 364,660 course review sentences across 13 disciplines from Class Central, 10 topic categories were identified automatically by a BERT-BiLSTM-Attention model, highlighting the potential of fine-tuned BERTs for analysing large-scale MOOC reviews. Topic distribution analyses across disciplines showed that learners in technical fields engaged more with assessment-related issues. Significant differences in topic frequencies between high- and low-star-rated courses indicated the critical role of course quality and instructor support in shaping learner satisfaction. The study also offers implications for improving learner satisfaction through interventions in course design and implementation that monitor learners' evolving needs.
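The abstract names a BERT-BiLSTM-Attention model for topic classification. The sketch below illustrates only the general shape of such an architecture: a BiLSTM over contextual token embeddings, attention pooling, and a linear layer over the 10 topic categories. It is a hypothetical illustration, not the authors' implementation; the BERT encoder is replaced by random stand-in embeddings, and all dimensions and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class BiLSTMAttentionHead(nn.Module):
    """Illustrative classification head: BiLSTM over token embeddings
    (e.g. from a fine-tuned BERT), additive attention pooling, and a
    linear classifier over topic categories. Dimensions are assumed."""
    def __init__(self, embed_dim=768, hidden=128, num_topics=10):
        super().__init__()
        self.lstm = nn.LSTM(embed_dim, hidden,
                            batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)    # per-token attention score
        self.cls = nn.Linear(2 * hidden, num_topics)

    def forward(self, token_embeds):            # (batch, seq_len, embed_dim)
        h, _ = self.lstm(token_embeds)          # (batch, seq_len, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)  # attention weights over tokens
        pooled = (w * h).sum(dim=1)             # weighted sum: (batch, 2*hidden)
        return self.cls(pooled)                 # logits: (batch, num_topics)

# Forward pass with random stand-in embeddings (in the study these would
# come from the fine-tuned BERT encoder over review sentences).
x = torch.randn(2, 16, 768)
logits = BiLSTMAttentionHead()(x)
print(tuple(logits.shape))  # (2, 10): one score per topic category
```

The attention pooling step is what lets the model weight informative tokens (e.g. assessment-related terms) more heavily before classification.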
ISSN:1492-3831
DOI:10.19173/irrodl.v26i1.8068
Source: Education Database