Explainable Machine Learning for Mental Health Detection Using NLP

Bibliographic data
Published in: ADCAIJ: Advances in Distributed Computing and Artificial Intelligence Journal, vol. 14 (2025), p. e32449-e32466
Main author: Noor Ul Ain Mushtaq
Other authors: Narejo, Sanam; Syed Amjad Ali; Muhammad Moazzam Jawaid
Published by: Ediciones Universidad de Salamanca
Online access: Citation/Abstract; Full Text - PDF
Description
Abstract: Humans' mental conditions are often revealed through their social media activity, facilitated by the anonymity of the internet. Early detection of psychiatric issues through these activities can lead to timely interventions, potentially preventing severe mental health disorders such as depression and anxiety. However, the complexity of state-of-the-art machine learning (ML) models has led to challenges in interpretability, often resulting in these models being viewed as «black boxes». This paper provides a comprehensive analysis of explainable AI (XAI) within the framework of Natural Language Processing (NLP) and ML. In this context, NLP techniques improve the performance of learning-based methods by incorporating the semantic and syntactic features of the text. The application of ML in healthcare is gaining traction, particularly in extracting novel scientific insights from observational or simulated data, where domain knowledge is crucial for achieving scientific consistency and explainability. In our study, we implemented Naïve Bayes and Random Forest algorithms, achieving accuracies of 92 % and 99 %, respectively. To further explore transparency, interpretability, and explainability, we applied explainable ML techniques, with LIME emerging as a popular tool. Our findings underscore the importance of integrating XAI methods to better understand and interpret the decisions made by complex ML models.
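The pipeline the abstract describes (a Naïve Bayes text classifier whose individual predictions are probed with a LIME-style local explanation) can be sketched as follows. This is a minimal illustration, not the paper's method: the toy posts and labels are invented, and the leave-one-word-out scoring is a simplified stand-in for LIME's full perturbation-sampling procedure.

```python
# Train a Naive Bayes text classifier on toy posts, then approximate a
# LIME-style local explanation: score each word by how much removing it
# shifts the predicted probability of the "at-risk" class.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "i feel hopeless and sad every day",
    "nothing matters anymore i am so tired",
    "had a great walk with friends today",
    "excited about my new project at work",
]
labels = [1, 1, 0, 0]  # 1 = at-risk, 0 = neutral (illustrative toy labels)

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(texts, labels)

def explain(text, model):
    """Rank words by the drop in P(at-risk) when each is removed."""
    words = text.split()
    base = model.predict_proba([text])[0][1]  # P(class 1) for full text
    scores = {}
    for i, w in enumerate(words):
        perturbed = " ".join(words[:i] + words[i + 1:])
        scores[w] = base - model.predict_proba([perturbed])[0][1]
    return sorted(scores.items(), key=lambda kv: -abs(kv[1]))

# Words with the largest (absolute) local influence on this prediction:
for word, weight in explain("i feel hopeless and tired", clf)[:3]:
    print(f"{word}: {weight:+.3f}")
```

The real LIME library (`lime.lime_text.LimeTextExplainer`) fits a weighted linear surrogate over many randomly masked variants of the input rather than removing one word at a time, but the intuition is the same: the explanation is local to a single prediction of an otherwise black-box model.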
ISSN: 2255-2863
DOI: 10.14201/adcaij.32449
Source: Advanced Technologies & Aerospace Database