MARC

LEADER 00000nab a2200000uu 4500
001 3275510607
003 UK-CbPIL
022 |a 2227-7102 
022 |a 2076-3344 
024 7 |a 10.3390/educsci15111507  |2 doi 
035 |a 3275510607 
045 2 |b d20250101  |b d20251231 
084 |a 231457  |2 nlm 
100 1 |a Watts, Field M  |u Educational Testing Service (ETS), 660 Rosedale Road, Princeton, NJ 08541, USA 
245 1 |a A Framework for Designing an AI Chatbot to Support Scientific Argumentation 
260 |b MDPI AG  |c 2025 
513 |a Journal Article 
520 3 |a As large language models (LLMs) are increasingly used to support learning, there is a growing need for a principled framework to guide the design of LLM-based tools and resources that are pedagogically effective and contextually responsive. This study proposes such a framework by examining how prompt engineering can enhance the quality of chatbot responses that support middle school students’ scientific reasoning and argumentation. Drawing on learning theories and established frameworks for scientific argumentation, we employed a design-based research approach to iteratively refine system prompts and evaluate LLM-generated responses across diverse student input scenarios. We report findings from the iterative refinement process, along with an analysis of the quality of responses generated by each version of the chatbot; the outcomes indicate how different prompt configurations influence the coherence, relevance, and explanatory depth of LLM responses. The study contributes a set of critical design principles for developing theory-aligned prompts that enable LLM-based chatbots to meaningfully support students in constructing and revising scientific arguments. These principles offer broader implications for designing LLM applications across varied educational domains. 
653 |a Pedagogy 
653 |a Students 
653 |a Formative evaluation 
653 |a Learning activities 
653 |a Automation 
653 |a Feedback 
653 |a Generative artificial intelligence 
653 |a Teachers 
653 |a Chatbots 
653 |a Tutoring 
653 |a Classrooms 
653 |a Design 
653 |a Natural language processing 
653 |a Middle schools 
653 |a Large language models 
653 |a Literature Reviews 
653 |a Intelligent Tutoring Systems 
653 |a Learning Processes 
653 |a Learning Theories 
653 |a Cooperative Learning 
653 |a Simulation 
653 |a Feedback (Response) 
653 |a Pedagogical Content Knowledge 
653 |a Artificial Intelligence 
653 |a Evidence Based Practice 
653 |a Science Instruction 
653 |a Educational Assessment 
653 |a Opportunities 
653 |a Language Processing 
653 |a Engineering Education 
653 |a Learner Engagement 
653 |a Scientific Principles 
653 |a Educational Facilities Improvement 
653 |a Educational Strategies 
653 |a Class Activities 
700 1 |a Liu, Lei  |u Educational Testing Service (ETS), 660 Rosedale Road, Princeton, NJ 08541, USA 
700 1 |a Ober, Teresa M  |u Educational Testing Service (ETS), 660 Rosedale Road, Princeton, NJ 08541, USA 
700 1 |a Song, Yi  |u Educational Testing Service (ETS), 660 Rosedale Road, Princeton, NJ 08541, USA 
700 1 |a Jusino-Del Valle, Euvelisse  |u Educational Testing Service (ETS), 660 Rosedale Road, Princeton, NJ 08541, USA 
700 1 |a Zhai, Xiaoming  |u AI4STEM Education Center, University of Georgia, Athens, GA 30602, USA 
700 1 |a Wang, Yun  |u School of Computing, University of Georgia, Athens, GA 30602, USA; yw83522@uga.edu (Y.W.) 
700 1 |a Liu, Ninghao  |u School of Computing, University of Georgia, Athens, GA 30602, USA 
773 0 |t Education Sciences  |g vol. 15, no. 11 (2025), p. 1507-1531 
786 0 |d ProQuest  |t Education Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3275510607/abstract/embedded/6A8EOT78XXH2IG52?source=fedsrch 
856 4 0 |3 Full Text + Graphics  |u https://www.proquest.com/docview/3275510607/fulltextwithgraphics/embedded/6A8EOT78XXH2IG52?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3275510607/fulltextPDF/embedded/6A8EOT78XXH2IG52?source=fedsrch