AI or nay? Evaluating the potential use of ChatGPT (Open AI) and Perplexity AI in undergraduate nursing research: An exploratory case study

Bibliographic Details
Published in: Nurse Education in Practice vol. 87 (Aug 2025), p. 104488-104499
Main Author: Ng, Jamie Qiao Xin
Other Authors: Chua, Joelle Yan Xin; Choolani, Mahesh; Li, Sarah W.L.; Foo, Lin; Pereira, Travis Lanz-Brian; Shorey, Shefaly
Published:
Elsevier Limited
Subjects:
Online Access: Citation/Abstract
Full Text
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3244814752
003 UK-CbPIL
022 |a 1471-5953 
022 |a 1873-5223 
024 7 |a 10.1016/j.nepr.2025.104488  |2 doi 
035 |a 3244814752 
045 2 |b d20250801  |b d20250831 
084 |a 170342  |2 nlm 
100 1 |a Ng, Jamie Qiao Xin  |u Alice Lee Centre for Nursing Studies, Yong Loo Lin School of Medicine, National University of Singapore, Singapore 
245 1 |a AI or nay? Evaluating the potential use of ChatGPT (Open AI) and Perplexity AI in undergraduate nursing research: An exploratory case study 
260 |b Elsevier Limited  |c Aug 2025 
513 |a Journal Article 
520 3 |a Aims: This study aimed to evaluate the performance of the publicly available large language models (LLMs) ChatGPT-4o, ChatGPT-4o Mini and Perplexity AI in responding to research-related questions at the undergraduate nursing level. The evaluation was conducted across different platforms and prompt structures. The research questions were categorized according to Bloom’s taxonomy to compare the quality of AI-generated responses across cognitive levels. Additionally, the study explored the perspectives of research team members on using AI tools to teach foundational research concepts to undergraduate nursing students. Background: Large language models could help nursing students learn foundational research concepts, but their performance in answering research-related questions has not been explored. Design: An exploratory case study was conducted to evaluate the performance of ChatGPT-4o, ChatGPT-4o Mini and Perplexity AI in answering 41 research-related questions. Methods: Three different prompts (Prompt-1: unstructured with no context; Prompt-2: structured from a professor’s perspective; Prompt-3: structured from a student’s perspective) were tested. A validated, author-developed 5-point Likert-type scale was used to assess all AI-generated responses across six domains: Accuracy, Relevance, Clarity & Structure, Examples Provided, Critical Thinking and Referencing. Results: All three AI models generated higher-quality responses when structured prompts were used compared with unstructured prompts, and all responded well across the different Bloom’s taxonomy levels. ChatGPT-4o and ChatGPT-4o Mini answered research-related questions better than Perplexity AI. Conclusion: AI models hold promise as supplementary tools for enhancing undergraduate nursing students’ understanding of foundational research concepts. Further studies are warranted to evaluate their impact on specific research-related learning outcomes within nursing education. 
610 4 |a OpenAI 
653 |a Students 
653 |a Bloom's taxonomy 
653 |a Evidence-based practice 
653 |a Cognitive ability 
653 |a Performance evaluation 
653 |a Chatbots 
653 |a Cognition & reasoning 
653 |a Medical education 
653 |a Validity 
653 |a Classification 
653 |a Hypotheses 
653 |a Teaching 
653 |a Critical thinking 
653 |a Learning 
653 |a Nursing education 
653 |a Nursing 
653 |a Answers 
653 |a Case studies 
653 |a Skills 
653 |a Research methodology 
653 |a Artificial intelligence 
653 |a Evidence-based nursing 
653 |a Large language models 
653 |a Literacy 
653 |a College students 
653 |a Research 
653 |a Student attitudes 
653 |a Questions 
653 |a Human-computer interaction 
653 |a Nurses 
653 |a Academic achievement 
653 |a Concepts 
653 |a Learning outcomes 
653 |a Language modeling 
653 |a Researchers 
653 |a Nursing Research 
653 |a Taxonomy 
653 |a Thinking Skills 
653 |a Evidence Based Practice 
653 |a Evaluative Thinking 
653 |a Content Validity 
653 |a Undergraduate Students 
653 |a Research Skills 
653 |a Outcomes of Treatment 
653 |a Interrater Reliability 
653 |a Learning Experience 
653 |a Nursing Students 
653 |a Reference Materials 
653 |a Feedback (Response) 
653 |a Outcomes of Education 
700 1 |a Chua, Joelle Yan Xin  |u Alice Lee Centre for Nursing Studies, Yong Loo Lin School of Medicine, National University of Singapore, Singapore 
700 1 |a Choolani, Mahesh  |u Department of Obstetrics and Gynaecology, National University Hospital, Singapore 
700 1 |a Li, Sarah W.L.  |u Department of Obstetrics and Gynaecology, National University Hospital, Singapore 
700 1 |a Foo, Lin  |u Institute of Reproductive and Developmental Biology, Imperial College London, United Kingdom 
700 1 |a Pereira, Travis Lanz-Brian  |u Alice Lee Centre for Nursing Studies, Yong Loo Lin School of Medicine, National University of Singapore, Singapore 
700 1 |a Shorey, Shefaly  |u Alice Lee Centre for Nursing Studies, Yong Loo Lin School of Medicine, National University of Singapore, Singapore 
773 0 |t Nurse Education in Practice  |g vol. 87 (Aug 2025), p. 104488-104499 
786 0 |d ProQuest  |t Sociology Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3244814752/abstract/embedded/L8HZQI7Z43R0LA5T?source=fedsrch 
856 4 0 |3 Full Text  |u https://www.proquest.com/docview/3244814752/fulltext/embedded/L8HZQI7Z43R0LA5T?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3244814752/fulltextPDF/embedded/L8HZQI7Z43R0LA5T?source=fedsrch