Improving Human Action Recognition in Videos with CNN–sLSTM and Soft Attention Mechanism

Bibliographic Details
Published in: Journal of Electrical Systems, vol. 21, no. 1 (2025), pp. 122-138
Main Author: Khaled, Merit
Other Authors: Mohammed, Beladgham
Published: Engineering and Scientific Research Groups
Online Access: Citation/Abstract; Full Text - PDF
Description
Abstract: Action recognition in videos has become crucial in computer vision because of its diverse applications, such as multimedia indexing and surveillance in public environments. The incorporation of attention mechanisms into deep learning has gained considerable interest. This approach aims to emulate the human visual processing system by enabling models to focus on pertinent aspects of a scene and derive significant insights. This study introduces an advanced soft attention mechanism designed to enhance the CNN-sLSTM architecture for recognizing human actions in videos. We used the VGG19 convolutional neural network to extract spatial features from the video frames, whereas the sLSTM network models the temporal relationships between frames. The performance of our model was assessed using two widely used datasets, HMDB-51 and UCF-101, with accuracy as the key evaluation metric. Our results indicate substantial improvements, achieving accuracy scores of 53.12% (base approach) and 67.18% (with attention) for HMDB-51, and 83.98% (base approach) and 94.15% (with attention) for UCF-101. These results underscore the effectiveness of the proposed soft attention mechanism in improving the performance of video action recognition models.
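The abstract describes a pipeline in which per-frame spatial features from VGG19 are fed to a recurrent network whose hidden states are pooled by a soft attention layer before classification. The following is a minimal PyTorch sketch of that idea; the layer sizes, the use of a standard LSTM as a stand-in for the paper's sLSTM, and the exact attention formulation are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision import models

class AttentionActionRecognizer(nn.Module):
    """Hypothetical CNN + LSTM + soft attention sketch for video action recognition."""
    def __init__(self, num_classes, hidden_size=512):
        super().__init__()
        vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT)
        # VGG19 convolutional backbone extracts per-frame spatial features (512-dim after pooling).
        self.backbone = nn.Sequential(vgg.features, nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # Standard LSTM used here as a placeholder for the paper's sLSTM.
        self.rnn = nn.LSTM(input_size=512, hidden_size=hidden_size, batch_first=True)
        # Soft attention: one learned score per time step, normalized with softmax.
        self.attn = nn.Linear(hidden_size, 1)
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, clip):                        # clip: (B, T, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.backbone(clip.flatten(0, 1))   # (B*T, 512)
        feats = feats.view(b, t, -1)                # (B, T, 512)
        hidden, _ = self.rnn(feats)                 # (B, T, hidden_size)
        weights = torch.softmax(self.attn(hidden).squeeze(-1), dim=1)  # (B, T)
        context = (weights.unsqueeze(-1) * hidden).sum(dim=1)          # attention-weighted sum
        return self.classifier(context)

# Example: a batch of 2 clips, 8 frames each, e.g. for UCF-101's 101 classes.
logits = AttentionActionRecognizer(num_classes=101)(torch.randn(2, 8, 3, 224, 224))
```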
ISSN:1112-5209
Source: Engineering Database