The Efficacy of Semantics-Preserving Transformations in Self-Supervised Learning for Medical Ultrasound

Bibliographic details
Published in: Bioengineering vol. 12, no. 8 (2025), p. 855-889
Main author: VanBerlo, Blake
Other authors: Hoey, Jesse; Wong, Alexander; Arntfield, Robert
Publisher: MDPI AG
Description
Abstract: Data augmentation is a central component of joint embedding self-supervised learning (SSL). Approaches that work for natural images may not always be effective in medical imaging tasks. This study systematically investigated the impact of data augmentation and preprocessing strategies in SSL for lung ultrasound. Three data augmentation pipelines were assessed: (1) a baseline pipeline commonly used across imaging domains, (2) a novel semantics-preserving pipeline designed for ultrasound, and (3) a distilled set of the most effective transformations from both pipelines. Pretrained models were evaluated on multiple classification tasks: B-line detection, pleural effusion detection, and COVID-19 classification. Experiments revealed that semantics-preserving data augmentation resulted in the greatest performance for COVID-19 classification, a diagnostic task requiring global image context. Cropping-based methods yielded the greatest performance on the B-line and pleural effusion object classification tasks, which require strong local pattern recognition. Lastly, semantics-preserving ultrasound image preprocessing resulted in increased downstream performance for multiple tasks. Guidance regarding data augmentation and preprocessing strategies was synthesized for developers working with SSL in ultrasound.
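As a rough illustration of the contrast the abstract draws between a baseline augmentation recipe and a semantics-preserving one, the sketch below defines two candidate pipelines with torchvision. The specific transformations and parameter ranges are assumptions chosen for illustration, not the ones used in the study.

```python
# Illustrative sketch only: the transformations and parameters below are assumed,
# not taken from the paper. Requires torchvision.
from torchvision import transforms

# Baseline-style pipeline, similar to recipes common for natural images:
# aggressive cropping plus photometric jitter and blur.
baseline_pipeline = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.4, 1.0)),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ColorJitter(brightness=0.4, contrast=0.4),
    transforms.GaussianBlur(kernel_size=23, sigma=(0.1, 2.0)),
    transforms.ToTensor(),
])

# Hypothetical semantics-preserving pipeline: avoids heavy cropping so global
# context (relevant to diagnostic tasks such as COVID-19 classification) is
# retained, and keeps geometric and intensity changes within modest ranges.
semantics_preserving_pipeline = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomAffine(degrees=10, translate=(0.05, 0.05)),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ToTensor(),
])
```

Under this framing, the cropping-heavy baseline favors local pattern recognition (B-lines, pleural effusion), while the gentler pipeline preserves whole-image semantics needed for diagnosis-level tasks.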
ISSN: 2306-5354
DOI: 10.3390/bioengineering12080855
Resource: Engineering Database