Explainable Machine Learning for Mental Health Detection Using NLP

Bibliographic Details
Published in: ADCAIJ : Advances in Distributed Computing and Artificial Intelligence Journal vol. 14 (2025), p. e32449-e32466
Main Author: Noor Ul Ain Mushtaq
Other Authors: Narejo, Sanam; Syed Amjad Ali; Muhammad Moazzam Jawaid
Published: Ediciones Universidad de Salamanca
Subjects: Machine learning; Mental health; Natural language processing; Complexity; Explainable artificial intelligence; Decision trees; Mental disorders
Links: Citation/Abstract; Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3282913661
003 UK-CbPIL
022 |a 2255-2863 
024 7 |a 10.14201/adcaij.32449  |2 doi 
035 |a 3282913661 
045 2 |b d20250101  |b d20251231 
100 1 |a Noor Ul Ain Mushtaq 
245 1 |a Explainable Machine Learning for Mental Health Detection Using NLP 
260 |b Ediciones Universidad de Salamanca  |c 2025 
513 |a Journal Article 
520 3 |a Humans' mental conditions are often revealed through their social media activity, facilitated by the anonymity of the internet. Early detection of psychiatric issues through these activities can lead to timely interventions, potentially preventing severe mental health disorders such as depression and anxiety. However, the complexity of state-of-the-art machine learning (ML) models has led to challenges in interpretability, often resulting in these models being viewed as "black boxes". This paper provides a comprehensive analysis of explainable AI (XAI) within the framework of Natural Language Processing (NLP) and ML. NLP techniques improve the performance of learning-based methods by incorporating the semantic and syntactic features of the text. The application of ML in healthcare is gaining traction, particularly in extracting novel scientific insights from observational or simulated data, where domain knowledge is crucial for achieving scientific consistency and explainability. In our study, we implemented Naïve Bayes and Random Forest algorithms, achieving accuracies of 92% and 99%, respectively. To further explore transparency, interpretability, and explainability, we applied explainable ML techniques, with LIME emerging as a popular tool. Our findings underscore the importance of integrating XAI methods to better understand and interpret the decisions made by complex ML models. 
653 |a Machine learning 
653 |a Mental health 
653 |a Natural language processing 
653 |a Complexity 
653 |a Explainable artificial intelligence 
653 |a Decision trees 
653 |a Mental disorders 
700 1 |a Narejo, Sanam 
700 1 |a Syed Amjad Ali 
700 1 |a Muhammad Moazzam Jawaid 
773 0 |t ADCAIJ : Advances in Distributed Computing and Artificial Intelligence Journal  |g vol. 14 (2025), p. e32449-e32466 
786 0 |d ProQuest  |t Advanced Technologies & Aerospace Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3282913661/abstract/embedded/L8HZQI7Z43R0LA5T?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3282913661/fulltextPDF/embedded/L8HZQI7Z43R0LA5T?source=fedsrch
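The abstract describes training a text classifier (Naïve Bayes or Random Forest) on social-media posts and then applying LIME to explain individual predictions. The sketch below illustrates that idea only: the corpus, labels, and every function name are invented for illustration and are not the paper's data or code, and the explanation step is a hand-rolled, LIME-style local surrogate (random word masking plus a weighted linear model) rather than the actual `lime` library.

```python
# LIME-style local explanation for a toy text classifier.
# All data and names here are illustrative assumptions, not the paper's setup.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import Ridge
from sklearn.naive_bayes import MultinomialNB

# Tiny invented corpus (labels: 1 = distress-related, 0 = neutral).
texts = [
    "i feel hopeless and alone every day",
    "nothing matters anymore i am so tired of life",
    "i cannot sleep and my anxiety is overwhelming",
    "had a great lunch with friends today",
    "excited about the football match this weekend",
    "the weather is lovely and i went for a walk",
]
labels = [1, 1, 1, 0, 0, 0]

vec = CountVectorizer()
clf = MultinomialNB().fit(vec.fit_transform(texts), labels)

def lime_style_explain(sentence, n_samples=500, seed=0):
    """Randomly mask words, query the black-box model on each perturbed
    sentence, and fit a weighted linear surrogate whose coefficients
    score each word's local contribution to the positive class."""
    rng = np.random.default_rng(seed)
    words = sentence.split()
    masks = rng.integers(0, 2, size=(n_samples, len(words)))
    masks[0] = 1  # keep the unperturbed sentence in the sample
    perturbed = [" ".join(w for w, keep in zip(words, m) if keep) for m in masks]
    probs = clf.predict_proba(vec.transform(perturbed))[:, 1]
    # Weight samples by closeness to the original (fraction of words kept).
    weights = masks.mean(axis=1)
    surrogate = Ridge(alpha=1.0).fit(masks, probs, sample_weight=weights)
    return sorted(zip(words, surrogate.coef_), key=lambda t: -abs(t[1]))

for word, weight in lime_style_explain("i feel hopeless and tired today"):
    print(f"{word:>10s}  {weight:+.3f}")
```

Words with large positive surrogate coefficients (here, terms like "hopeless" that appear only in distress-labelled training texts) are the ones locally pushing the model toward the positive class; this mirrors LIME's core mechanism of explaining one prediction at a time with an interpretable local model.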