Development of an ensemble CNN model with explainable AI for the classification of gastrointestinal cancer

Saved in:
Bibliographic Details
Published in: PLoS One vol. 19, no. 6 (Jun 2024), p. e0305628
Main Author: Auzine, Muhammad Muzzammil
Other Authors: Khan, Maleika Heenaye-Mamode; Baichoo, Sunilduth; Nuzhah Gooda Sahib; Bissoonauth-Daiboo, Preeti; Gao, Xiaohong; Heetun, Zaid
Published:
Public Library of Science
Subjects:
Online Access: Citation/Abstract
Full Text
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3072226141
003 UK-CbPIL
022 |a 1932-6203 
024 7 |a 10.1371/journal.pone.0305628  |2 doi 
035 |a 3072226141 
045 2 |b d20240601  |b d20240630 
084 |a 174835  |2 nlm 
100 1 |a Auzine, Muhammad Muzzammil 
245 1 |a Development of an ensemble CNN model with explainable AI for the classification of gastrointestinal cancer 
260 |b Public Library of Science  |c Jun 2024 
513 |a Journal Article 
520 3 |a The implementation of AI-assisted cancer detection systems in clinical environments has faced numerous hurdles, mainly because of the restricted explainability of their underlying mechanisms, even though such detection systems have proven to be highly effective. Medical practitioners are skeptical about adopting AI-assisted diagnoses because of the latter’s inability to be transparent about decision-making processes. In this respect, explainable artificial intelligence (XAI) has emerged to provide explanations for model predictions, thereby overcoming the computational black-box problem associated with AI systems. This research focuses on the Shapley additive explanations (SHAP) and local interpretable model-agnostic explanations (LIME) approaches, which enable model prediction explanations. This study used an ensemble model consisting of three convolutional neural networks (CNNs): InceptionV3, InceptionResNetV2 and VGG16, combining their respective predictions through an averaging technique. These models were trained on the Kvasir dataset, which consists of pathological findings related to gastrointestinal cancer. Our ensemble model attained an accuracy of 96.89% and an F1-score of 96.877%. Following the training of the ensemble model, we employed SHAP and LIME to analyze images from the three classes, aiming to explain the deterministic features influencing the model’s predictions. The results of this analysis demonstrate encouraging progress in the exploration of XAI approaches, specifically in the context of gastrointestinal cancer detection within the healthcare domain. 
653 |a Accuracy 
653 |a Artificial intelligence 
653 |a Deep learning 
653 |a Gastrointestinal cancer 
653 |a Artificial neural networks 
653 |a Neural networks 
653 |a Esophageal cancer 
653 |a Colorectal cancer 
653 |a Explainable artificial intelligence 
653 |a Cancer 
653 |a Business metrics 
653 |a Machine learning 
653 |a Medical prognosis 
653 |a Polyps 
653 |a Colonoscopy 
653 |a Predictions 
653 |a Ulcers 
653 |a Decision making 
653 |a Classification 
653 |a System effectiveness 
653 |a Endoscopy 
653 |a Medical diagnosis 
653 |a Gastric cancer 
653 |a Social 
700 1 |a Khan, Maleika Heenaye-Mamode 
700 1 |a Baichoo, Sunilduth 
700 1 |a Nuzhah Gooda Sahib 
700 1 |a Bissoonauth-Daiboo, Preeti 
700 1 |a Gao, Xiaohong 
700 1 |a Heetun, Zaid 
773 0 |t PLoS One  |g vol. 19, no. 6 (Jun 2024), p. e0305628 
786 0 |d ProQuest  |t Health & Medical Collection 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3072226141/abstract/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch 
856 4 0 |3 Full Text  |u https://www.proquest.com/docview/3072226141/fulltext/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3072226141/fulltextPDF/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch