SPVINet: A Lightweight Multitask Learning Network for Assisting Visually Impaired People in Multiscene Perception

Bibliographic Details
Published in: IEEE Internet of Things Journal, vol. 11, no. 11 (2024), p. 20706
Main Author: Hong, Kaipeng
Other Authors: He, Weiqin; Tang, Hui; Zhang, Xing; Li, Qingquan; Zhou, Baoding
Published: The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Online Access: Citation/Abstract
Description
Abstract: Visual perception technology is an important means of enabling safe navigation for visually impaired people based on Internet of Things (IoT)-enabled camera sensors. However, owing to the rapid development of urban traffic systems, traveling outdoors is becoming increasingly complicated. Visually impaired individuals must perform several types of tasks simultaneously, such as finding roads, avoiding obstacles, and checking traffic lights, which is challenging both for them and for navigation assistance methods. To solve these problems, we propose a multitask visual navigation method for visually impaired individuals using an IoT-based camera. A lightweight neural network is designed that adopts a multitask learning architecture to perform scene classification and path detection simultaneously. We propose two modules, an enhanced inverted residuals (EIRs) block and a lightweight vision transformer (ViT) block (LWVIT block), to effectively combine the properties of convolutional neural networks (CNNs) and ViT networks. These two modules allow the network to better learn both local features and global representations of images while remaining lightweight. The experimental results show that the proposed method can accomplish these tasks simultaneously in a lightweight manner, which is important for IoT-based navigation applications. The accuracy of our method in scene classification reaches 91.7%. The path-direction and endpoint-detection errors are 6.59° and 0.09, respectively, for blind roads and 6.81° and 0.06, respectively, for crosswalks. Our method has only 0.993 M parameters, fewer than the comparison methods. An ablation study further demonstrates the effectiveness of the proposed method.
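The abstract describes a shared lightweight backbone that mixes CNN inverted-residual blocks with a lightweight ViT block and feeds two task heads. Below is a minimal PyTorch sketch of that general pattern, not the authors' implementation: every module name, layer size, and head dimension is a hypothetical assumption, with a plain MobileNetV2-style inverted residual standing in for the paper's EIR block and a single transformer encoder layer standing in for the LWVIT block.

```python
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    """MobileNetV2-style inverted residual (stand-in for the paper's EIR block)."""
    def __init__(self, in_ch, out_ch, expand=4, stride=1):
        super().__init__()
        hidden = in_ch * expand
        self.use_res = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 1, bias=False),                             # expand
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, stride, 1, groups=hidden, bias=False),  # depthwise
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, out_ch, 1, bias=False),                            # project
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        y = self.block(x)
        return x + y if self.use_res else y

class LightweightViTBlock(nn.Module):
    """One transformer encoder layer over flattened feature-map tokens
    (a generic stand-in for the paper's LWVIT block)."""
    def __init__(self, dim, heads=2):
        super().__init__()
        self.encoder = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=dim * 2, batch_first=True)

    def forward(self, x):                        # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)    # (B, H*W, C)
        tokens = self.encoder(tokens)            # global self-attention over tokens
        return tokens.transpose(1, 2).reshape(b, c, h, w)

class MultitaskNet(nn.Module):
    """Shared CNN+ViT backbone feeding two heads: scene classification and
    path regression (direction angle + endpoint coordinates)."""
    def __init__(self, num_scenes=4):            # num_scenes is illustrative
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(3, 16, 3, 2, 1), nn.BatchNorm2d(16), nn.ReLU6(inplace=True))
        self.backbone = nn.Sequential(
            InvertedResidual(16, 32, stride=2),  # local features via convolution
            InvertedResidual(32, 64, stride=2),
            LightweightViTBlock(64),             # global representation via attention
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.scene_head = nn.Linear(64, num_scenes)  # scene classification logits
        self.path_head = nn.Linear(64, 3)            # angle + endpoint (x, y)

    def forward(self, x):
        f = self.pool(self.backbone(self.stem(x))).flatten(1)
        return self.scene_head(f), self.path_head(f)

if __name__ == "__main__":
    net = MultitaskNet()
    scene_logits, path_params = net(torch.randn(1, 3, 224, 224))
    print(scene_logits.shape, path_params.shape)  # (1, 4) and (1, 3)
```

In a setup like this, the two heads would typically be trained jointly with a weighted sum of a classification loss and a regression loss over the shared backbone; the weighting scheme here is left open, as the abstract does not specify one.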
ISSN: 2327-4662
DOI: 10.1109/JIOT.2024.3371978
Source: ABI/INFORM Trade & Industry