Audiovisual Brain Activity Recognition Based on Symmetric Spatio-Temporal–Frequency Feature Association Vectors

Bibliographic details
Published in: Symmetry vol. 17, no. 12 (2025), p. 2175-2200
Main author: Yang, Xi
Other authors: Zhang, Lu; Wu, Chenxue; Shi, Bingjie; Li, Cunzhen
Publisher: MDPI AG
Online access: Citation/Abstract
Full Text + Graphics
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3286358207
003 UK-CbPIL
022 |a 2073-8994 
024 7 |a 10.3390/sym17122175  |2 doi 
035 |a 3286358207 
045 2 |b d20250101  |b d20251231 
084 |a 231635  |2 nlm 
100 1 |a Yang, Xi 
245 1 |a Audiovisual Brain Activity Recognition Based on Symmetric Spatio-Temporal–Frequency Feature Association Vectors 
260 |b MDPI AG  |c 2025 
513 |a Journal Article 
520 3 |a The neural mechanisms of auditory and visual processing are not only a core research focus in cognitive neuroscience but also hold critical importance for the development of brain–computer interfaces, neurological disease diagnosis, and human–computer interaction technologies. However, EEG-based studies on classifying auditory and visual brain activities largely overlook the in-depth utilization of the spatial distribution patterns and frequency-specific characteristics inherent in such activities. This paper proposes an analytical framework that represents brain activities with symmetric spatio-temporal–frequency feature association vectors, constructed by computing EEG microstates across multiple frequency bands together with brain functional connectivity networks. We then construct an Adaptive Tensor Fusion Network (ATFN) that processes these feature association vectors to recognize brain activities related to auditory, visual, and audiovisual processing. The ATFN comprises a feature fusion and selection module based on differential feature enhancement, an attention-enhanced feature encoding module, and a multilayer-perceptron classifier for the efficient recognition of audiovisual brain activities. The results show that the ATFN achieves a classification accuracy of 96.97% for auditory, visual, and audiovisual brain activity, demonstrating that the proposed symmetric spatio-temporal–frequency feature association vectors effectively characterize these three types of brain activity. These vectors establish a computable mapping that captures the intrinsic correlations among temporal, spatial, and frequency features, offering a more interpretable representation of brain activities. The proposed ATFN thus provides an effective recognition framework for brain activity, with potential applications in brain–computer interfaces and neurological disease diagnosis. 
653 |a Activity recognition 
653 |a Accuracy 
653 |a Neurological diseases 
653 |a Classification 
653 |a Spatial distribution 
653 |a Human-computer interface 
653 |a Neurological disorders 
653 |a Brain research 
653 |a Multilayer perceptrons 
653 |a Neural networks 
653 |a Tensors 
653 |a Frequencies 
653 |a Diagnosis 
653 |a Information processing 
653 |a Electroencephalography 
653 |a Modules 
700 1 |a Zhang, Lu 
700 1 |a Wu, Chenxue 
700 1 |a Shi, Bingjie 
700 1 |a Li, Cunzhen 
773 0 |t Symmetry  |g vol. 17, no. 12 (2025), p. 2175-2200 
786 0 |d ProQuest  |t Engineering Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3286358207/abstract/embedded/L8HZQI7Z43R0LA5T?source=fedsrch 
856 4 0 |3 Full Text + Graphics  |u https://www.proquest.com/docview/3286358207/fulltextwithgraphics/embedded/L8HZQI7Z43R0LA5T?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3286358207/fulltextPDF/embedded/L8HZQI7Z43R0LA5T?source=fedsrch