AI chatbots in programming education: guiding success or encouraging plagiarism

Saved in:
Bibliographic Details
Published in: Discover Artificial Intelligence vol. 4, no. 1 (Dec 2024), p. 87
Main author: Akçapınar, Gökhan
Other authors: Sidan, Elif
Published:
Springer Nature B.V.
Subjects: Object oriented programming; Teaching; Programming languages; Students; Plagiarism; Automation; Python; Large language models; Generative artificial intelligence; Learning; Education; Chatbots
Online access: Citation/Abstract
Full Text
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3131665308
003 UK-CbPIL
022 |a 2731-0809 
024 7 |a 10.1007/s44163-024-00203-7  |2 doi 
035 |a 3131665308 
045 2 |b d20241201  |b d20241231 
100 1 |a Akçapınar, Gökhan  |u Hacettepe University, Department of Computer Education and Instructional Technology, Ankara, Türkiye (GRID:grid.14442.37) (ISNI:0000 0001 2342 7339) 
245 1 |a AI chatbots in programming education: guiding success or encouraging plagiarism 
260 |b Springer Nature B.V.  |c Dec 2024 
513 |a Journal Article 
520 3 |a This study examines the impact of an AI programming assistant on students' exam scores and their tendency to accept incorrect AI-generated information. The customized AI programming assistant was developed by the authors using a GPT-based large language model (LLM). A one-group pretest–posttest quasi-experimental design was used to answer the research questions. Students were asked to take identical programming exams twice: once without AI assistance and once with the option to use the AI assistant. Results showed that students' average exam scores increased significantly, from 48.33 to 74.47, with a large effect size (d = 1.56) when they used the AI assistant. On the other hand, when student-AI interaction logs were analyzed for a specific question, it was found that the AI generated incorrect answers for 36 students. Thirty-three of these students (92%) answered the question incorrectly. Even more strikingly, despite the AI-generated response containing an obvious error, 22 of them (61%) copied and pasted the AI's response directly into the answer field. Only 3 students (8%) ignored the incorrect response generated by the AI and answered the question correctly. The fact that a significant portion of students accepted incorrect information provided by the AI underscores the need for careful integration of AI tools into learning environments. Moreover, our findings emphasize the importance of specially developed AI tools, rather than free tools like ChatGPT, in exploring this new type of interaction between students and AI. 
653 |a Object oriented programming 
653 |a Teaching 
653 |a Programming languages 
653 |a Students 
653 |a Plagiarism 
653 |a Automation 
653 |a Python 
653 |a Large language models 
653 |a Generative artificial intelligence 
653 |a Learning 
653 |a Education 
653 |a Chatbots 
700 1 |a Sidan, Elif  |u Hacettepe University, Department of Computer Education and Instructional Technology, Ankara, Türkiye (GRID:grid.14442.37) (ISNI:0000 0001 2342 7339) 
773 0 |t Discover Artificial Intelligence  |g vol. 4, no. 1 (Dec 2024), p. 87 
786 0 |d ProQuest  |t Research Library 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3131665308/abstract/embedded/ZKJTFFSVAI7CB62C?source=fedsrch 
856 4 0 |3 Full Text  |u https://www.proquest.com/docview/3131665308/fulltext/embedded/ZKJTFFSVAI7CB62C?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3131665308/fulltextPDF/embedded/ZKJTFFSVAI7CB62C?source=fedsrch