An Experiment with LLMs as Database Design Tutors: Persistent Equity and Fairness Challenges in Online Learning
Saved in:
| Published in: | Education Sciences vol. 15, no. 3 (2025), p. 386 |
|---|---|
| Author: | |
| Published: | MDPI AG |
| Subjects: | |
| Online access: | Citation/Abstract, Full Text + Graphics, Full Text - PDF |
| Summary: | As large language models (LLMs) continue to evolve, their capacity to serve as surrogates for human tutors is also improving. As increasing numbers of intelligent tutoring systems (ITSs) embrace the integration of LLMs for digital tutoring, questions are arising about how effective they are and whether their hallucinatory behaviors diminish their perceived advantages. One critical question that is seldom asked is whether the availability, plurality, and relative weaknesses in the reasoning processes of LLMs are contributing to the much-discussed digital divide and to equity and fairness concerns in online learning. In this paper, we present an experiment with database design theory assignments and demonstrate that, while their capacity to reason logically is improving, LLMs are still prone to serious errors. We show that in online learning, in the absence of a human instructor, LLMs can introduce inequity in the form of “wrongful” tutoring, which we call ignorant bias, that could be devastatingly harmful to learners in increasingly popular digital learning settings. We also show that significant challenges remain for STEM subjects, especially those for which sound and free online tutoring systems exist. Based on this set of use cases, we formulate a possible direction for an effective ITS for online database learning classes of the future. |
| ISSN: | 2227-7102, 2076-3344 |
| DOI: | 10.3390/educsci15030386 |
| Source: | Education Database |