Automatic Classification of Online Learner Reviews Via Fine-Tuned BERTs

Saved in:
Bibliographic Details
Published in: International Review of Research in Open and Distributed Learning vol. 26, no. 1 (Mar 2025), p. 57
Main Author: Chen, Xieling
Other Authors: Zou, Di; Xie, Haoran; Cheng, Gary; Li, Zongxi; Fu Lee Wang
Published:
International Review of Research in Open and Distributed Learning
Online Access: Citation/Abstract
Full Text
Full Text - PDF
Full text outside of ProQuest

MARC

LEADER 00000nab a2200000uu 4500
001 3177605310
003 UK-CbPIL
022 |a 1492-3831 
024 7 |a 10.19173/irrodl.v26i1.8068  |2 doi 
035 |a 3177605310 
045 2 |b d20250301  |b d20250331 
084 |a 68600  |2 nlm 
100 1 |a Chen, Xieling 
245 1 |a Automatic Classification of Online Learner Reviews Via Fine-Tuned BERTs 
260 |b International Review of Research in Open and Distributed Learning  |c Mar 2025 
513 |a Journal Article 
520 3 |a Massive open online courses (MOOCs) offer rich opportunities to understand learners’ learning experiences by examining their self-generated course evaluation content. This study investigated the effectiveness of fine-tuned BERT models for the automated classification of topics in online course reviews and explored how these topics varied across disciplines and course rating groups. Based on 364,660 course review sentences across 13 disciplines from Class Central, 10 topic categories were identified automatically by a BERT-BiLSTM-Attention model, highlighting the potential of fine-tuned BERTs for analysing large-scale MOOC reviews. Topic distribution analyses across disciplines showed that learners in technical fields were particularly engaged with assessment-related issues. Significant differences in topic frequencies between high- and low-star-rated courses indicated the critical role of course quality and instructor support in shaping learner satisfaction. The study also offers implications for improving learner satisfaction through interventions in course design and implementation that monitor learners’ evolving needs effectively. 
653 |a Language 
653 |a Automatic classification 
653 |a Datasets 
653 |a Deep learning 
653 |a Ontology 
653 |a Data analysis 
653 |a Automation 
653 |a Feedback 
653 |a Machine learning 
653 |a Neural networks 
653 |a Decision making 
653 |a Online instruction 
653 |a Design 
653 |a Algorithms 
653 |a Designers 
653 |a Education 
653 |a Semantics 
653 |a Learning Activities 
653 |a Literature Reviews 
653 |a Distance Education 
653 |a Sample Size 
653 |a Learning Experience 
653 |a Personal Autonomy 
653 |a Instructional Materials 
653 |a Short Term Memory 
653 |a Coding 
653 |a Online Courses 
653 |a Artificial Intelligence 
653 |a Writing Instruction 
653 |a MOOCs 
653 |a Student Writing Models 
653 |a Language Processing 
653 |a Course Content 
653 |a Learner Engagement 
653 |a Educational Facilities Improvement 
653 |a Attention 
700 1 |a Zou, Di 
700 1 |a Xie, Haoran 
700 1 |a Cheng, Gary 
700 1 |a Li, Zongxi 
700 1 |a Fu Lee Wang 
773 0 |t International Review of Research in Open and Distributed Learning  |g vol. 26, no. 1 (Mar 2025), p. 57 
786 0 |d ProQuest  |t Education Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3177605310/abstract/embedded/J7RWLIQ9I3C9JK51?source=fedsrch 
856 4 0 |3 Full Text  |u https://www.proquest.com/docview/3177605310/fulltext/embedded/J7RWLIQ9I3C9JK51?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3177605310/fulltextPDF/embedded/J7RWLIQ9I3C9JK51?source=fedsrch 
856 4 0 |3 Full text outside of ProQuest  |u http://eric.ed.gov/?id=EJ1463472