Explainable Machine Learning for Mental Health Detection Using NLP

Published in: ADCAIJ: Advances in Distributed Computing and Artificial Intelligence Journal, vol. 14 (2025), p. e32449-e32466
Main author: Noor Ul Ain Mushtaq
Other authors: Narejo, Sanam; Syed Amjad Ali; Muhammad Moazzam Jawaid
Published by: Ediciones Universidad de Salamanca
Description
Abstract: Humans' mental conditions are often revealed through their social media activity, facilitated by the anonymity of the internet. Early detection of psychiatric issues through these activities can lead to timely interventions, potentially preventing severe mental health disorders such as depression and anxiety. However, the complexity of state-of-the-art machine learning (ML) models has led to challenges in interpretability, often resulting in these models being viewed as "black boxes". This paper provides a comprehensive analysis of explainable AI (XAI) within the framework of Natural Language Processing (NLP) and ML. NLP techniques improve the performance of learning-based methods by incorporating the semantic and syntactic features of the text. The application of ML in healthcare is gaining traction, particularly in extracting novel scientific insights from observational or simulated data. Domain knowledge is crucial for achieving scientific consistency and explainability. In our study, we implemented Naïve Bayes and Random Forest algorithms, achieving accuracies of 92% and 99%, respectively. To further explore transparency, interpretability, and explainability, we applied explainable ML techniques, with LIME emerging as a popular tool. Our findings underscore the importance of integrating XAI methods to better understand and interpret the decisions made by complex ML models.
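The pipeline the abstract describes (text features feeding Naïve Bayes and Random Forest classifiers) can be sketched as follows. This is a minimal illustration with a hypothetical toy corpus and default scikit-learn settings, not the authors' code or data; the paper's LIME explanation step is omitted here and would be applied on top of the fitted models.

```python
# Minimal sketch (assumed setup): text classification with Naive Bayes and
# Random Forest, as in the abstract. Toy posts are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier

# Hypothetical posts; 1 = signs of distress, 0 = neutral
texts = [
    "I feel hopeless and can't sleep anymore",
    "Great day hiking with friends in the sun",
    "Everything feels pointless and I am so anxious",
    "Excited about my new job starting Monday",
]
labels = [1, 0, 1, 0]

# NLP step: turn raw text into TF-IDF features capturing lexical cues
vec = TfidfVectorizer()
X = vec.fit_transform(texts)

# The two classifiers evaluated in the paper
nb = MultinomialNB().fit(X, labels)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

# Classify an unseen (invented) post with the Naive Bayes model
print(nb.predict(vec.transform(["I feel hopeless and anxious"])))
```

On real data, the same fitted models would be passed to a LIME text explainer to attribute each prediction to individual words, which is how the "black box" concern in the abstract is addressed.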
ISSN:2255-2863
DOI:10.14201/adcaij.32449
Source: Advanced Technologies & Aerospace Database