AI or nay? Evaluating the potential use of ChatGPT (Open AI) and Perplexity AI in undergraduate nursing research: An exploratory case study
Saved in:

| Published in: | Nurse Education in Practice vol. 87 (Aug 2025), p. 104488-104499 |
|---|---|
| Published: | Elsevier Limited |
| Online Access: | Citation/Abstract; Full Text; Full Text - PDF |
| Abstract: | **Aims:** This study aimed to evaluate the performance of publicly available large language models (LLMs) — ChatGPT-4o, ChatGPT-4o Mini and Perplexity AI — in responding to research-related questions at the undergraduate nursing level. The evaluation was conducted across different platforms and prompt structures. The research questions were categorized according to Bloom's taxonomy to compare the quality of AI-generated responses across cognitive levels. Additionally, the study explored the perspectives of research team members on using AI tools to teach foundational research concepts to undergraduate nursing students. **Background:** Large language models could help nursing students learn foundational research concepts, but their performance in answering research-related questions has not been explored. **Design:** An exploratory case study was conducted to evaluate the performance of ChatGPT-4o, ChatGPT-4o Mini and Perplexity AI in answering 41 research-related questions. **Methods:** Three different prompts were tested (Prompt 1: unstructured, with no context; Prompt 2: structured, from a professor's perspective; Prompt 3: structured, from a student's perspective). A 5-point Likert-type, author-developed scale was used to assess all AI-generated responses across six domains: Accuracy, Relevance, Clarity & Structure, Examples Provided, Critical Thinking and Referencing. **Results:** All three AI models generated higher-quality responses with structured prompts than with unstructured prompts, and they responded well across the different Bloom's taxonomy levels. ChatGPT-4o and ChatGPT-4o Mini answered research-related questions better than Perplexity AI. **Conclusion:** AI models hold promise as supplementary tools for enhancing undergraduate nursing students' understanding of foundational research concepts. Further studies are warranted to evaluate their impact on specific research-related learning outcomes within nursing education. |
|---|---|
| ISSN: | 1471-5953; 1873-5223 |
| DOI: | 10.1016/j.nepr.2025.104488 |
| Source: | Sociology Database |