MARC

LEADER 00000nab a2200000uu 4500
001 3268438189
003 UK-CbPIL
022 |a 1472-6920 
024 7 |a 10.1186/s12909-025-07872-7  |2 doi 
035 |a 3268438189 
045 2 |b d20250101  |b d20251231 
084 |a 58506  |2 nlm 
100 1 |a Chen, Guihua 
245 1 |a Virtual case reasoning and AI-assisted diagnostic instruction: an empirical study based on Body Interact and large language models 
260 |b Springer Nature B.V.  |c 2025 
513 |a Journal Article 
520 3 |a Background: Integrating large language models (LLMs) with virtual patient platforms offers a novel approach to teaching clinical reasoning. This study evaluated the performance and educational value of combining Body Interact with two AI models, ChatGPT-4 and DeepSeek-R1, across acute care scenarios. Methods: Three standardized cases (coma, stroke, trauma) were simulated by two medical researchers. Structured case summaries were input into both models using identical prompts. Outputs were assessed for diagnostic and treatment consistency, alignment with clinical reasoning stages, and educational quality using expert scoring, AI self-assessment, text readability indices, and Grammarly analysis. Results: ChatGPT-4 performed best in stroke scenarios but was less consistent in coma and trauma cases. DeepSeek-R1 showed more stable diagnostic and therapeutic output across all cases. While both models received high expert and self-assessment scores, ChatGPT-4 produced more readable outputs, and DeepSeek-R1 demonstrated greater grammatical precision. Conclusions: Our findings suggest that ChatGPT-4 and DeepSeek-R1 each offer unique strengths for AI-assisted instruction. ChatGPT-4’s accessible language may better support early learners, whereas DeepSeek-R1 may be more aligned with formal clinical reasoning. Selecting models based on specific teaching goals can enhance the effectiveness of AI-driven medical education. 
610 4 |a Hangzhou DeepSeek Artificial Intelligence Co Ltd 
610 4 |a OpenAI 
653 |a Language 
653 |a Emergency medical care 
653 |a Accuracy 
653 |a Medical education 
653 |a Interdisciplinary aspects 
653 |a Metabolism 
653 |a Chatbots 
653 |a Stroke 
653 |a Reflective teaching 
653 |a Coma 
653 |a Artificial intelligence 
653 |a Trauma 
653 |a Clinical decision making 
653 |a Illnesses 
653 |a Consciousness 
653 |a Bilingualism 
653 |a Large language models 
653 |a Fainting 
653 |a Computer Simulation 
653 |a Educational Quality 
653 |a Physicians 
653 |a Patients 
653 |a Neurology 
653 |a Relevance (Education) 
653 |a Educational Research 
653 |a Diagnostic Tests 
653 |a Measurement Techniques 
653 |a Individualized Instruction 
653 |a Program Evaluation 
653 |a Decision Making 
653 |a Medical Evaluation 
653 |a Diagnostic Teaching 
653 |a Evaluative Thinking 
653 |a Educational Assessment 
653 |a Comparative Education 
653 |a Comparative Analysis 
653 |a Physical Examinations 
653 |a Interdisciplinary Approach 
653 |a Learner Engagement 
700 1 |a Lin, Chuan 
700 1 |a Zhang, Lijie 
700 1 |a Luo, Zhao 
700 1 |a Shin, Yu Seob 
700 1 |a Li, Xianxin 
773 0 |t BMC Medical Education  |g vol. 25 (2025), p. 1-17 
786 0 |d ProQuest  |t Healthcare Administration Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3268438189/abstract/embedded/L8HZQI7Z43R0LA5T?source=fedsrch 
856 4 0 |3 Full Text  |u https://www.proquest.com/docview/3268438189/fulltext/embedded/L8HZQI7Z43R0LA5T?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3268438189/fulltextPDF/embedded/L8HZQI7Z43R0LA5T?source=fedsrch