Applications of Large Language Models in the Field of Suicide Prevention: Scoping Review

Bibliographic Information
Published in: Journal of Medical Internet Research vol. 27 (2025), p. e63126
Main author: Holmes, Glenn
Other authors: Tang, Biya; Gupta, Sunil; Venkatesh, Svetha; Christensen, Helen; Whitton, Alexis
Published:
Gunther Eysenbach MD MPH, Associate Professor
Subjects:
Online access: Citation/Abstract
Full Text + Graphics
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3222367928
003 UK-CbPIL
022 |a 1438-8871 
024 7 |a 10.2196/63126  |2 doi 
035 |a 3222367928 
045 2 |b d20250101  |b d20251231 
100 1 |a Holmes, Glenn 
245 1 |a Applications of Large Language Models in the Field of Suicide Prevention: Scoping Review 
260 |b Gunther Eysenbach MD MPH, Associate Professor  |c 2025 
513 |a Journal Article 
520 3 |a Background: Prevention of suicide is a global health priority. Approximately 800,000 individuals die by suicide yearly, and for every suicide death, there are another 20 estimated suicide attempts. Large language models (LLMs) hold the potential to enhance scalable, accessible, and affordable digital services for suicide prevention and self-harm interventions. However, their use also raises clinical and ethical questions that require careful consideration. Objective: This scoping review aims to identify emergent trends in LLM applications in the field of suicide prevention and self-harm research. In addition, it summarizes key clinical and ethical considerations relevant to this nascent area of research. Methods: Searches were conducted in 4 databases (PsycINFO, Embase, PubMed, and IEEE Xplore) in February 2024. Eligible studies described the application of LLMs for suicide or self-harm prevention, detection, or management. English-language peer-reviewed articles and conference proceedings were included, without date restrictions. Narrative synthesis was used to synthesize study characteristics, objectives, models, data sources, proposed clinical applications, and ethical considerations. This review adhered to the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) standards. Results: Of the 533 studies identified, 36 (6.8%) met the inclusion criteria. An additional 7 studies were identified through citation chaining, resulting in 43 studies for review. The studies showed a bifurcation of publication fields, with varying publication norms between computer science and mental health. While most of the studies (33/43, 77%) focused on identifying suicide risk, newer applications leveraging generative functions (eg, support, education, and training) are emerging. Social media was the most common source of LLM training data. Bidirectional Encoder Representations from Transformers (BERT) was the predominant model used, although generative pretrained transformers (GPTs) featured prominently in generative applications. Clinical LLM applications were reported in 60% (26/43) of the studies, often for suicide risk detection or as clinical assistance tools. Ethical considerations were reported in 33% (14/43) of the studies, with privacy, confidentiality, and consent strongly represented. Conclusions: This evolving research area, bridging computer science and mental health, demands a multidisciplinary approach. While open access models and datasets will likely shape the field of suicide prevention, documenting their limitations and potential biases is crucial. High-quality training data are essential for refining these models and mitigating unwanted biases. Policies that address ethical concerns—particularly those related to privacy and security when using social media data—are imperative. Limitations include high variability across disciplines in how LLMs and study methodology are reported. The emergence of generative artificial intelligence signals a shift in approach, particularly in applications related to care, support, and education, such as improved crisis care and gatekeeper training methods, clinician copilot models, and improved educational practices. Ongoing human oversight—through human-in-the-loop testing or expert external validation—is essential for responsible development and use. Trial Registration: OSF Registries osf.io/nckq7; https://osf.io/nckq7 
653 |a Language 
653 |a Computer science 
653 |a Databases 
653 |a Prevention programs 
653 |a Application 
653 |a Trends 
653 |a Clinical standards 
653 |a Social networks 
653 |a Self destructive behavior 
653 |a Suicide 
653 |a Automation 
653 |a Ethics 
653 |a Bidirectionality 
653 |a Chatbots 
653 |a Self injury 
653 |a Artificial intelligence 
653 |a Machine learning 
653 |a Systematic review 
653 |a Social media 
653 |a Confidentiality 
653 |a Bias 
653 |a Natural language processing 
653 |a Linguistics 
653 |a Interdisciplinary aspects 
653 |a Mental health 
653 |a Privacy 
653 |a Large language models 
653 |a Text messaging 
653 |a Suicide prevention 
653 |a Social education 
653 |a Meta-analysis 
653 |a Models 
653 |a Data quality 
653 |a Registration 
653 |a Social security 
653 |a Prevention 
653 |a Mass media 
653 |a Mental health services 
653 |a Training 
653 |a Computer assisted research 
653 |a English language 
653 |a Education 
653 |a Language modeling 
653 |a Research 
700 1 |a Tang, Biya 
700 1 |a Gupta, Sunil 
700 1 |a Venkatesh, Svetha 
700 1 |a Christensen, Helen 
700 1 |a Whitton, Alexis 
773 0 |t Journal of Medical Internet Research  |g vol. 27 (2025), p. e63126 
786 0 |d ProQuest  |t Library Science Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3222367928/abstract/embedded/6A8EOT78XXH2IG52?source=fedsrch 
856 4 0 |3 Full Text + Graphics  |u https://www.proquest.com/docview/3222367928/fulltextwithgraphics/embedded/6A8EOT78XXH2IG52?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3222367928/fulltextPDF/embedded/6A8EOT78XXH2IG52?source=fedsrch