A self-supervised deep learning pipeline for segmentation in two-photon fluorescence microscopy

Bibliographic Details
Published in: bioRxiv (Jan 22, 2025)
Main author: Ntiri, Emmanuel Edward
Other authors: Xu, Tony; Rozak, Matthew W; Attarpour, Ahmadreza; Dorr, Adrienne; Stefanovic, Bojana; Goubran, Maged
Published: Cold Spring Harbor Laboratory Press
Subjects: Image processing; Fluorescence microscopy; Deep learning; Structure-function relationships; Neuroimaging; Functional anatomy; Microscopy; Sensitivity analysis
Online access: Citation/Abstract
Full text outside of ProQuest

MARC

LEADER 00000nab a2200000uu 4500
001 3158241518
003 UK-CbPIL
022 |a 2692-8205 
024 7 |a 10.1101/2025.01.20.633744  |2 doi 
035 |a 3158241518 
045 0 |b d20250122 
100 1 |a Ntiri, Emmanuel Edward 
245 1 |a A self-supervised deep learning pipeline for segmentation in two-photon fluorescence microscopy 
260 |b Cold Spring Harbor Laboratory Press  |c Jan 22, 2025 
513 |a Working Paper 
520 3 |a Two-photon fluorescence microscopy (TPFM) allows in situ investigation of the structure and function of the brain at a cellular level, but conventional image analyses of TPFM data are labour-intensive. Automated deep learning (DL)-based image processing pipelines used to analyze TPFM data require large labeled training datasets. Here, we developed a self-supervised learning (SSL) pipeline to test whether unlabeled data can be used to boost the accuracy and generalizability of DL models for image segmentation in TPFM. We specifically developed four pretext tasks, including shuffling, rotation, axis rotation, and reconstruction, to train models without supervision using the UNet architecture. We validated our pipeline on two tasks (neuronal soma and vasculature segmentation), using large 3D microscopy datasets. We introduced a novel density-based metric, which provided evaluation more sensitive to downstream analysis tasks. We further determined the amount of labeled data required to reach performance on par with fully supervised learning (FSL) models. SSL-based models that were fine-tuned with only 50% of the data were on par with or superior to FSL models (e.g., a Dice increase of 3% for neuron segmentation and a Dice score of 0.88 +/- 0.09 for vessel segmentation). We demonstrated that segmentation maps generated by SSL models pretrained on the reconstruction and rotation tasks translate better to downstream tasks than those pretrained on other SSL tasks. Finally, we benchmarked all models on a publicly available out-of-distribution dataset, demonstrating that SSL models outperform FSL models when trained with clean data and are more robust than FSL models when trained with noisy data. Competing Interest Statement: The authors have declared no competing interest. Footnotes: * https://search.kg.ebrains.eu/instances/bf268b89-1420-476b-b428-b85a913eb523 (An illustrative sketch of the Dice metric follows the record below.)
653 |a Image processing 
653 |a Fluorescence microscopy 
653 |a Deep learning 
653 |a Structure-function relationships 
653 |a Neuroimaging 
653 |a Functional anatomy 
653 |a Microscopy 
653 |a Sensitivity analysis 
700 1 |a Xu, Tony 
700 1 |a Rozak, Matthew W 
700 1 |a Attarpour, Ahmadreza 
700 1 |a Dorr, Adrienne 
700 1 |a Stefanovic, Bojana 
700 1 |a Goubran, Maged 
773 0 |t bioRxiv  |g (Jan 22, 2025) 
786 0 |d ProQuest  |t Biological Science Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3158241518/abstract/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch 
856 4 0 |3 Full text outside of ProQuest  |u https://www.biorxiv.org/content/10.1101/2025.01.20.633744v1
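
The abstract above reports segmentation quality with the Dice score. For quick reference, the following is a minimal NumPy sketch of the standard Dice coefficient (2·|A ∩ B| / (|A| + |B|)) for binary 3D masks. It illustrates the generic metric only; it is not code from the paper, and the paper's novel density-based metric is not reproduced here.

# Minimal sketch of the standard Dice similarity coefficient for binary masks.
# Illustrative only -- not the authors' implementation.
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Dice coefficient between two binary masks of the same shape.

    pred, target: boolean or {0, 1} arrays (e.g. 3D segmentation volumes).
    eps: small constant to avoid division by zero when both masks are empty.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

if __name__ == "__main__":
    # Toy 3D volume: identical masks give Dice = 1.0, disjoint masks give ~0.0.
    a = np.zeros((4, 4, 4), dtype=bool)
    a[1:3, 1:3, 1:3] = True
    print(dice_score(a, a))                  # 1.0
    print(dice_score(a, np.zeros_like(a)))   # ~0.0

The epsilon term is a common convention so that two empty masks score 1 rather than raising a division-by-zero error; any small positive value works for typical volume sizes.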