Advancing AI Interpretability in Medical Imaging: A Comparative Analysis of Pixel-Level Interpretability and Grad-CAM Models

Bibliographic Details
Published in: Machine Learning and Knowledge Extraction vol. 7, no. 1 (2025), p. 12
Main Author: Ennab, Mohammad
Other Authors: Mcheick, Hamid
Published: MDPI AG
Online Access: Citation/Abstract
Full Text + Graphics
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3181640284
003 UK-CbPIL
022 |a 2504-4990 
024 7 |a 10.3390/make7010012  |2 doi 
035 |a 3181640284 
045 2 |b d20250101  |b d20250331 
100 1 |a Ennab, Mohammad 
245 1 |a Advancing AI Interpretability in Medical Imaging: A Comparative Analysis of Pixel-Level Interpretability and Grad-CAM Models 
260 |b MDPI AG  |c 2025 
513 |a Journal Article 
520 3 |a This study introduces the Pixel-Level Interpretability (PLI) model, a novel framework designed to address critical limitations in medical imaging diagnostics by enhancing model transparency and diagnostic accuracy. The primary objective is to evaluate PLI’s performance against Gradient-Weighted Class Activation Mapping (Grad-CAM) and to achieve fine-grained interpretability and improved localization precision. The methodology leverages the VGG19 convolutional neural network architecture and utilizes three publicly available COVID-19 chest radiograph datasets, consisting of over 1000 labeled images, which were preprocessed through resizing, normalization, and augmentation to ensure robustness and generalizability. The experiments focused on key performance metrics, including interpretability, structural similarity (SSIM), diagnostic precision, mean squared error (MSE), and computational efficiency. The results demonstrate that PLI significantly outperforms Grad-CAM in all measured dimensions. PLI produced detailed pixel-level heatmaps with higher SSIM scores, reduced MSE, and faster inference times, showcasing its ability to provide granular insights into localized diagnostic features while maintaining computational efficiency. In contrast, Grad-CAM’s explanations often lacked the granularity required for clinical reliability. By integrating fuzzy logic to enhance visual and numerical explanations, PLI can deliver interpretable outputs that align with clinical expectations, enabling practitioners to make informed decisions with higher confidence. This work establishes PLI as a robust tool for bridging gaps in AI model transparency and clinical usability. By addressing the challenges of interpretability and accuracy simultaneously, PLI contributes to advancing the integration of AI in healthcare and sets a foundation for broader applications in other high-stakes domains. 
653 |a Accuracy 
653 |a Medical electronics 
653 |a Machine learning 
653 |a Performance measurement 
653 |a Pixels 
653 |a Deep learning 
653 |a Performance evaluation 
653 |a Artificial intelligence 
653 |a Fuzzy logic 
653 |a Artificial neural networks 
653 |a Neural networks 
653 |a Medical imaging 
653 |a Computational efficiency 
653 |a Clinical decision making 
653 |a Literature reviews 
700 1 |a Mcheick, Hamid 
773 0 |t Machine Learning and Knowledge Extraction  |g vol. 7, no. 1 (2025), p. 12 
786 0 |d ProQuest  |t Advanced Technologies & Aerospace Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3181640284/abstract/embedded/09EF48XIB41FVQI7?source=fedsrch 
856 4 0 |3 Full Text + Graphics  |u https://www.proquest.com/docview/3181640284/fulltextwithgraphics/embedded/09EF48XIB41FVQI7?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3181640284/fulltextPDF/embedded/09EF48XIB41FVQI7?source=fedsrch
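
Note: the abstract (field 520) compares Grad-CAM heatmaps against the proposed PLI model using SSIM and MSE on a VGG19 backbone. The following is a minimal, hypothetical sketch, not the authors' PLI implementation and not code from the paper, showing how a Grad-CAM heatmap can be computed from a VGG19 classifier and scored against a reference region with SSIM and MSE. The PyTorch/scikit-image stack, the random input tensor, and the synthetic reference mask are assumptions made purely for illustration.

```python
# Hypothetical illustration only: this is NOT the paper's PLI model. It sketches
# how a Grad-CAM heatmap can be produced from a VGG19 classifier and compared to
# a reference region with SSIM and MSE, the metrics named in the abstract.
import numpy as np
import torch
import torch.nn.functional as F
from torchvision import models
from skimage.metrics import structural_similarity, mean_squared_error


def grad_cam(model, image, target_class):
    """Return a [0, 1] Grad-CAM heatmap for one image tensor of shape (1, 3, H, W)."""
    activations, gradients = [], []
    # Hook the last convolutional layer of the VGG19 feature extractor.
    last_conv = [m for m in model.features if isinstance(m, torch.nn.Conv2d)][-1]
    fwd = last_conv.register_forward_hook(lambda m, i, o: activations.append(o))
    bwd = last_conv.register_full_backward_hook(lambda m, gi, go: gradients.append(go[0]))

    logits = model(image)
    model.zero_grad()
    logits[0, target_class].backward()
    fwd.remove()
    bwd.remove()

    acts, grads = activations[0], gradients[0]
    weights = grads.mean(dim=(2, 3), keepdim=True)   # channel weights = spatially pooled gradients
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
    cam = cam.squeeze().detach().numpy()
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)


if __name__ == "__main__":
    model = models.vgg19(weights=None).eval()        # a trained checkpoint would be loaded in practice
    x = torch.rand(1, 3, 224, 224)                   # stand-in for a preprocessed chest radiograph
    heatmap = grad_cam(model, x, target_class=0)

    reference = np.zeros_like(heatmap)               # stand-in for an expert-annotated region
    reference[80:160, 80:160] = 1.0
    print("SSIM:", structural_similarity(heatmap, reference, data_range=1.0))
    print("MSE :", mean_squared_error(heatmap, reference))
```

Normalizing the heatmap to [0, 1] before scoring keeps the SSIM and MSE comparison on a common scale; how the paper itself aligns heatmaps with reference annotations is not specified in this record.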