Description
Abstract: As large language models (LLMs) are increasingly used to support learning, there is a growing need for a principled framework to guide the design of LLM-based tools and resources that are pedagogically effective and contextually responsive. This study proposes such a framework by examining how prompt engineering can enhance the quality of chatbot responses that support middle school students' scientific reasoning and argumentation. Drawing on learning theories and established frameworks for scientific argumentation, we employed a design-based research approach to iteratively refine system prompts and evaluate LLM-generated responses across diverse student input scenarios. We report findings from the iterative refinement process, along with an analysis of the quality of responses generated by each version of the chatbot. The outcomes indicate how different prompt configurations influence the coherence, relevance, and explanatory depth of LLM responses. The study contributes a set of critical design principles for developing theory-aligned prompts that enable LLM-based chatbots to meaningfully support students in constructing and revising scientific arguments. These principles offer broader implications for designing LLM applications across varied educational domains.
ISSN: 2227-7102; 2076-3344
DOI: 10.3390/educsci15111507
Source: Education Database