Image-Based Deep Learning for Brain Tumour Transcriptomics: A Benchmark of DeepInsight, Fotomics, and Saliency-Guided CNNs

Saved in:
Bibliographic Details
Published in: Machine Learning and Knowledge Extraction, vol. 7, no. 4 (2025), p. 119-134
Main Author: Alyatimi, Ali
Other Authors: Chung, Vera; Iqbal, Muhammad Atif; Anaissi, Ali
Published: MDPI AG
Online Access: Citation/Abstract
Full Text + Graphics
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3286316618
003 UK-CbPIL
022 |a 2504-4990 
024 7 |a 10.3390/make7040119  |2 doi 
035 |a 3286316618 
045 2 |b d20251001  |b d20251231 
100 1 |a Alyatimi, Ali  |u Faculty of Engineering, School of Computer Science, The University of Sydney, Sydney, NSW 2008, Australia 
245 1 |a Image-Based Deep Learning for Brain Tumour Transcriptomics: A Benchmark of DeepInsight, Fotomics, and Saliency-Guided CNNs 
260 |b MDPI AG  |c 2025 
513 |a Journal Article 
520 3 |a Classifying brain tumour transcriptomic data is crucial for precision medicine but remains challenging due to high dimensionality and the limited interpretability of conventional models. This study benchmarks three image-based deep learning approaches, DeepInsight, Fotomics, and a novel saliency-guided convolutional neural network (CNN), for transcriptomic classification. DeepInsight utilises dimensionality reduction to spatially arrange gene features, while Fotomics applies Fourier transforms to encode expression patterns into structured images. The proposed method transforms each single-cell gene expression profile into an RGB image using PCA, UMAP, or t-SNE, enabling CNNs such as ResNet to learn spatially organised molecular features. Gradient-based saliency maps are employed to highlight the gene regions most influential in model predictions. Evaluation is conducted on two biologically and technologically distinct datasets: single-cell RNA-seq from glioblastoma (GSM3828672) and bulk microarray data from medulloblastoma (GSE85217). Outcomes demonstrate that image-based deep learning methods, particularly those incorporating saliency guidance, provide a robust and interpretable framework for uncovering biologically meaningful patterns in complex high-dimensional omics data. For instance, ResNet-18 achieved the highest accuracies of 97.25% on GSE85217 and 91.02% on GSM3828672, outperforming the other baseline models across multiple metrics. 
653 |a Datasets 
653 |a Accuracy 
653 |a Tumors 
653 |a Gene expression 
653 |a Deep learning 
653 |a Classification 
653 |a Brain cancer 
653 |a Fourier transforms 
653 |a Salience 
653 |a Artificial neural networks 
653 |a Neural networks 
653 |a Glioma 
653 |a Quality control 
653 |a Brain 
653 |a Images 
653 |a Algorithms 
653 |a Machine learning 
653 |a Benchmarks 
700 1 |a Chung, Vera  |u Faculty of Engineering, School of Computer Science, The University of Sydney, Sydney, NSW 2008, Australia 
700 1 |a Iqbal, Muhammad Atif  |u Faculty of Engineering, School of Computer Science, The University of Sydney, Sydney, NSW 2008, Australia 
700 1 |a Anaissi, Ali  |u Faculty of Engineering, School of Computer Science, The University of Sydney, Sydney, NSW 2008, Australia 
773 0 |t Machine Learning and Knowledge Extraction  |g vol. 7, no. 4 (2025), p. 119-134 
786 0 |d ProQuest  |t Advanced Technologies & Aerospace Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3286316618/abstract/embedded/75I98GEZK8WCJMPQ?source=fedsrch 
856 4 0 |3 Full Text + Graphics  |u https://www.proquest.com/docview/3286316618/fulltextwithgraphics/embedded/75I98GEZK8WCJMPQ?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3286316618/fulltextPDF/embedded/75I98GEZK8WCJMPQ?source=fedsrch
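
The abstract (MARC 520 field above) outlines an expression-to-image pipeline: genes are laid out in 2-D via dimensionality reduction, each cell's expression values are painted onto that layout as an image, a ResNet classifies the images, and gradient saliency highlights influential pixel regions. The following is a minimal, hypothetical Python sketch of that idea; the function names, the 64x64 grid, the single PCA layout replicated across RGB channels, and the four-class ResNet-18 head are illustrative assumptions, not the authors' released implementation.

# Hypothetical sketch of an expression-to-image transform plus vanilla gradient
# saliency, loosely following the pipeline summarised in the abstract.
import numpy as np
import torch
from sklearn.decomposition import PCA
from torchvision.models import resnet18


def expression_to_images(X, size=64):
    """Map an (n_cells, n_genes) matrix to (n_cells, 3, size, size) image tensors.

    Genes are embedded in 2-D with PCA (t-SNE or UMAP could be swapped in),
    each gene is assigned a pixel location, and a cell's expression values are
    accumulated at those locations. The single layout is replicated across the
    three channels only so standard RGB CNN backbones can be reused.
    """
    coords = PCA(n_components=2).fit_transform(X.T)            # one 2-D point per gene
    coords -= coords.min(axis=0)
    coords /= coords.max(axis=0) + 1e-12                       # scale to [0, 1]
    px = np.clip((coords * (size - 1)).round().astype(int), 0, size - 1)

    imgs = np.zeros((X.shape[0], size, size), dtype=np.float32)
    for g, (r, c) in enumerate(px):                            # paint expression per gene pixel
        imgs[:, r, c] += X[:, g]
    imgs /= imgs.max(axis=(1, 2), keepdims=True) + 1e-12       # per-image normalisation
    return torch.from_numpy(np.repeat(imgs[:, None], 3, axis=1))


def saliency_map(model, image):
    """Gradient saliency for a single (3, H, W) image: |d top-logit / d input|."""
    model.eval()
    x = image.unsqueeze(0).clone().requires_grad_(True)
    model(x).max().backward()                                  # gradient of the top logit w.r.t. the input
    return x.grad.abs().max(dim=1).values.squeeze(0)           # (H, W) importance map


if __name__ == "__main__":
    X = np.abs(np.random.randn(32, 2000)).astype(np.float32)   # stand-in expression matrix
    images = expression_to_images(X)
    model = resnet18(weights=None, num_classes=4)               # e.g. four tumour subgroups
    sal = saliency_map(model, images[0])
    print(images.shape, sal.shape)                              # (32, 3, 64, 64) and (64, 64)

In a real workflow the random matrix would be replaced by a quality-controlled, normalised expression matrix (e.g. from GSM3828672 or GSE85217), and the network would be trained before saliency maps are interpreted; the sketch only illustrates the data flow described in the abstract.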