A Unified YOLOv8 Approach for Point-of-Care Diagnostics of Salivary α-Amylase

Saved in:
Bibliographic Details
Published in: Biosensors vol. 15, no. 7 (2025), p. 421-439
Main Author: Amin Youssef
Other Authors: Cecere, Paola; Pompa, Pier Paolo
Published: MDPI AG
Subjects:
Online Access: Citation/Abstract
Full Text + Graphics
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3233104053
003 UK-CbPIL
022 |a 2079-6374 
024 7 |a 10.3390/bios15070421  |2 doi 
035 |a 3233104053 
045 2 |b d20250101  |b d20251231 
084 |a 231435  |2 nlm 
100 1 |a Amin Youssef  |u Istituto Italiano di Tecnologia (IIT), Nanobiointeractions & Nanodiagnostics, Via Morego 30, 16163 Genova, Italy; paola.cecere@iit.it 
245 1 |a A Unified YOLOv8 Approach for Point-of-Care Diagnostics of Salivary α-Amylase 
260 |b MDPI AG  |c 2025 
513 |a Journal Article 
520 3 |a Salivary α-amylase (sAA) is a widely recognized biomarker for stress and autonomic nervous system activity. However, conventional enzymatic assays used to quantify sAA are limited by time-consuming, lab-based protocols. In this study, we present a portable, AI-driven point-of-care system for automated sAA classification via colorimetric image analysis. The system integrates SCHEDA, a custom-designed imaging device providing standardized illumination, with a deep learning pipeline optimized for mobile deployment. Two classification strategies were compared: (1) a modular YOLOv4-CNN architecture and (2) a unified YOLOv8 segmentation-classification model. The models were trained on a dataset of 1024 images spanning an eight-class problem, with each class corresponding to a distinct sAA concentration. The results show that red-channel input significantly enhances YOLOv4-CNN performance, achieving 93.5% accuracy compared to 88% with full RGB images. The unified YOLOv8 model outperformed both YOLOv4-CNN configurations, reaching 96.5% accuracy while simplifying the pipeline and enabling real-time, on-device inference. The system was deployed and validated on a smartphone, demonstrating consistent results in live tests. This work highlights a robust, low-cost platform capable of delivering fast, reliable, and scalable salivary diagnostics for mobile health applications. 
653 |a Accuracy 
653 |a Amylases 
653 |a Datasets 
653 |a Deep learning 
653 |a Classification 
653 |a Smartphones 
653 |a Color imagery 
653 |a Artificial neural networks 
653 |a Architecture 
653 |a Image processing 
653 |a Colorimetry 
653 |a Automation 
653 |a Machine learning 
653 |a α-Amylase 
653 |a Point of care testing 
653 |a Efficiency 
653 |a Autonomic nervous system 
653 |a Image analysis 
653 |a Embedded systems 
653 |a Computer vision 
653 |a Lighting 
653 |a Biomarkers 
653 |a Nervous system 
653 |a Object recognition 
653 |a Real time 
653 |a Enzymes 
653 |a Data transmission 
653 |a Environmental 
700 1 |a Cecere, Paola  |u Istituto Italiano di Tecnologia (IIT), Nanobiointeractions & Nanodiagnostics, Via Morego 30, 16163 Genova, Italy; paola.cecere@iit.it 
700 1 |a Pompa, Pier Paolo  |u Istituto Italiano di Tecnologia (IIT), Nanobiointeractions & Nanodiagnostics, Via Morego 30, 16163 Genova, Italy; paola.cecere@iit.it 
773 0 |t Biosensors  |g vol. 15, no. 7 (2025), p. 421-439 
786 0 |d ProQuest  |t Health & Medical Collection 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3233104053/abstract/embedded/ZKJTFFSVAI7CB62C?source=fedsrch 
856 4 0 |3 Full Text + Graphics  |u https://www.proquest.com/docview/3233104053/fulltextwithgraphics/embedded/ZKJTFFSVAI7CB62C?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3233104053/fulltextPDF/embedded/ZKJTFFSVAI7CB62C?source=fedsrch
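
Note on the indexed abstract: it describes red-channel preprocessing for the YOLOv4-CNN pipeline and a unified YOLOv8 segmentation-classification model that runs in a single pass on the device. The following is a minimal illustrative sketch only, not the authors' code. It assumes the Ultralytics YOLOv8 Python API and OpenCV; the weight file and image name (saa_yolov8-seg.pt, strip.jpg) are hypothetical placeholders.

# Illustrative sketch: red-channel preprocessing (reported in the abstract for the
# YOLOv4-CNN route) and unified YOLOv8 segmentation-classification inference.
# Assumes the ultralytics and opencv-python packages; file names are hypothetical.
import cv2
from ultralytics import YOLO

def red_channel(image_bgr):
    """Keep only the red channel (OpenCV loads images as BGR) and replicate it
    across three channels so a standard detection backbone accepts the input."""
    r = image_bgr[:, :, 2]
    return cv2.merge([r, r, r])

# Hypothetical weights fine-tuned for the eight sAA-concentration classes.
model = YOLO("saa_yolov8-seg.pt")

img = cv2.imread("strip.jpg")   # hypothetical colorimetric test image
results = model(img)            # one forward pass: locate the pad and classify it

for box in results[0].boxes:
    cls_id = int(box.cls)
    print(f"predicted sAA class: {results[0].names[cls_id]} (conf {float(box.conf):.2f})")

As the abstract notes, folding localization and classification into one YOLOv8 forward pass removes the separate detect-then-classify stages, which is what makes real-time, on-device smartphone inference practical.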