Improving Human Action Recognition in Videos with CNN–sLSTM and Soft Attention Mechanism

Bibliographic Details
Published in: Journal of Electrical Systems, vol. 21, no. 1 (2025), pp. 122-138
Main author: Khaled, Merit
Other authors: Mohammed, Beladgham
Publication: Engineering and Scientific Research Groups
Description
Abstract: Action recognition in videos has become crucial in computer vision because of its diverse applications, such as multimedia indexing and surveillance in public environments. The incorporation of attention mechanisms into deep learning models has attracted considerable interest; this approach aims to emulate the human visual processing system by enabling models to focus on pertinent aspects of a scene and derive significant insights. This study introduces an advanced soft attention mechanism designed to enhance the CNN–sLSTM architecture for recognizing human actions in videos. We used the VGG19 convolutional neural network to extract spatial features from the video frames, whereas the sLSTM network models the temporal relationships between frames. The performance of our model was assessed using two widely used datasets, HMDB-51 and UCF-101, with precision as the key evaluation metric. Our results indicate substantial improvements, achieving accuracy scores of 53.12% (base approach) and 67.18% (with attention) for HMDB-51, and 83.98% (base approach) and 94.15% (with attention) for UCF-101. These results underscore the effectiveness of the proposed soft attention mechanism in improving the performance of video action recognition models.
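
The abstract outlines a pipeline in which VGG19 extracts per-frame spatial features, a soft attention mechanism weights the most relevant regions, and an sLSTM aggregates information over time. The sketch below is one plausible PyTorch reading of that pipeline; the class names, hidden size, attention scoring function, and the use of a plain LSTM cell in place of the paper's sLSTM are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
from torchvision.models import vgg19

class SoftAttention(nn.Module):
    # Soft attention over the spatial locations of a CNN feature map,
    # conditioned on the previous recurrent hidden state (assumed formulation).
    def __init__(self, feat_dim, hidden_dim):
        super().__init__()
        self.score = nn.Linear(feat_dim + hidden_dim, 1)

    def forward(self, feats, h):
        # feats: (B, L, D) spatial locations of one frame; h: (B, H) previous hidden state
        h_exp = h.unsqueeze(1).expand(-1, feats.size(1), -1)
        alpha = torch.softmax(self.score(torch.cat([feats, h_exp], dim=-1)), dim=1)
        return (alpha * feats).sum(dim=1)  # attention-weighted context vector (B, D)

class AttentiveCNNLSTM(nn.Module):
    # Assumed pipeline: VGG19 spatial features -> soft attention -> recurrent temporal model.
    # A standard LSTM cell stands in for the sLSTM, whose exact form the abstract does not specify.
    def __init__(self, num_classes, hidden_dim=512):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.cnn = vgg19(weights="IMAGENET1K_V1").features  # (B, 512, 7, 7) per 224x224 frame
        self.attn = SoftAttention(512, hidden_dim)
        self.rnn = nn.LSTMCell(512, hidden_dim)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, clip):
        # clip: (B, T, 3, 224, 224), a batch of T-frame video clips
        B, T = clip.shape[:2]
        h = clip.new_zeros(B, self.hidden_dim)
        c = clip.new_zeros(B, self.hidden_dim)
        for t in range(T):
            fmap = self.cnn(clip[:, t])              # (B, 512, 7, 7)
            feats = fmap.flatten(2).transpose(1, 2)  # (B, 49, 512) spatial locations
            ctx = self.attn(feats, h)                # focus on action-relevant regions
            h, c = self.rnn(ctx, (h, c))
        return self.fc(h)                            # class logits (51 or 101 classes)

# Example usage (hypothetical): logits for a batch of two 16-frame UCF-101 clips.
# logits = AttentiveCNNLSTM(num_classes=101)(torch.randn(2, 16, 3, 224, 224))
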
ISSN: 1112-5209
Source: Engineering Database