XIMED: A Dual-Loop Evaluation Framework Integrating Predictive Model and Human-Centered Approaches for Explainable AI in Medical Imaging

Bibliographic Details
Published in: Machine Learning and Knowledge Extraction, vol. 7, no. 4 (2025), pp. 168-205
Lead Author: Gizem Karagoz
Other Authors: Tanir Ozcelebi; Nirvana Meratnia
Published by: MDPI AG
Online Access: Citation/Abstract; Full Text + Graphics; Full Text - PDF
Description
Abstract: This study proposes and implements a structured, methodological evaluation approach for eXplainable Artificial Intelligence (XAI) methods in medical image classification, using LIME and SHAP explanations for chest X-ray interpretation. The evaluation framework integrates two critical perspectives: predictive model-centered and human-centered evaluations. Predictive model-centered evaluations examine the explanations' ability to reflect changes in the input data, the output data, and the internal model structure. Human-centered evaluations, conducted with 97 medical experts, assess trust, confidence, and agreement with the AI's indicative and contra-indicative reasoning, as well as how these change before and after explanations are provided. Key findings include the sensitivity of LIME and SHAP explanations to model changes, their effectiveness in identifying critical features, and SHAP's significant impact on diagnosis changes. Our results show that both LIME and SHAP negatively affected contra-indicative agreement. Case-based analysis revealed that AI explanations reinforce trust and agreement when participants' initial diagnoses are correct; in these cases, SHAP effectively facilitated correct diagnostic changes. This study establishes a benchmark for future research in XAI for medical image analysis, providing a robust foundation for evaluating and comparing different XAI methods.
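The model-centered evaluation described in the abstract — checking whether an explanation faithfully reflects how input changes move the model's output — can be illustrated with a perturbation-style occlusion check. The sketch below is a hypothetical, simplified stand-in, not the paper's actual protocol: a toy logistic model over 16 "superpixel" features replaces the chest X-ray classifier, and the exact per-feature contributions x_i * w_i replace real LIME or SHAP attributions. The idea is the same: occlude the features the explanation ranks highest and compare the resulting prediction change against occluding random features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an image classifier: logistic model over 16
# "superpixel" features (hypothetical; the paper uses X-ray classifiers).
w = rng.normal(size=16)

def predict(x):
    """Probability of the positive class for a feature vector x."""
    return 1.0 / (1.0 + np.exp(-x @ w))

x = rng.normal(size=16)
base = predict(x)

# Attribution per feature: here the exact contribution x_i * w_i,
# standing in for a LIME/SHAP importance score.
attributions = x * w

# Faithfulness check: zero out the k features the explanation ranks
# highest (by absolute attribution) and measure the prediction shift.
k = 4
top = np.argsort(-np.abs(attributions))[:k]
x_occluded = x.copy()
x_occluded[top] = 0.0
drop = abs(base - predict(x_occluded))

# Baseline: occlude k random features. A faithful explanation should,
# on average, move the output more than random occlusion does.
rand = rng.choice(16, size=k, replace=False)
x_rand = x.copy()
x_rand[rand] = 0.0
drop_rand = abs(base - predict(x_rand))

print(f"occlude top-{k} by attribution: delta p = {drop:.3f}")
print(f"occlude {k} random features:    delta p = {drop_rand:.3f}")
```

In practice, the same occlusion loop would wrap a trained classifier and attributions produced by the lime and shap packages; the toy linear model simply keeps the sketch self-contained.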
ISSN:2504-4990
DOI:10.3390/make7040168
Source: Advanced Technologies & Aerospace Database