Efficient Dynamic Emotion Recognition from Facial Expressions Using Statistical Spatio-Temporal Geometric Features

Saved in:
Bibliographic Details
Published in: Big Data and Cognitive Computing vol. 9, no. 8 (2025), p. 213-236
Main Author: Yaddaden Yacine
Published:
MDPI AG
Subjects:
Online Access: Citation/Abstract
Full Text + Graphics
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3243980484
003 UK-CbPIL
022 |a 2504-2289 
024 7 |a 10.3390/bdcc9080213  |2 doi 
035 |a 3243980484 
045 2 |b d20250101  |b d20251231 
100 1 |a Yaddaden Yacine 
245 1 |a Efficient Dynamic Emotion Recognition from Facial Expressions Using Statistical Spatio-Temporal Geometric Features 
260 |b MDPI AG  |c 2025 
513 |a Journal Article 
520 3 |a Automatic Facial Expression Recognition (AFER) is a key component of affective computing, enabling machines to recognize and interpret human emotions across various applications such as human–computer interaction, healthcare, entertainment, and social robotics. Dynamic AFER systems, which exploit image sequences, can capture the temporal evolution of facial expressions but often suffer from high computational costs, limiting their suitability for real-time use. In this paper, we propose an efficient dynamic AFER approach based on a novel spatio-temporal representation. Facial landmarks are extracted, and all possible Euclidean distances are computed to model the spatial structure. To capture temporal variations, three statistical metrics are applied to each distance sequence. A feature selection stage based on the Extremely Randomized Trees (ExtRa-Trees) algorithm is then performed to reduce dimensionality and enhance classification performance. Finally, the emotions are classified using a linear multi-class Support Vector Machine (SVM) and compared against the k-Nearest Neighbors (k-NN) method. The proposed approach is evaluated on three benchmark datasets: CK+, MUG, and MMI, achieving recognition rates of 94.65%, 93.98%, and 75.59%, respectively. Our results demonstrate that the proposed method achieves a strong balance between accuracy and computational efficiency, making it well-suited for real-time facial expression recognition applications. 
610 4 |a CNN 
653 |a Robotics 
653 |a Face recognition 
653 |a Accuracy 
653 |a Datasets 
653 |a Deep learning 
653 |a Wavelet transforms 
653 |a Affective computing 
653 |a Support vector machines 
653 |a Emotion recognition 
653 |a Communication 
653 |a Pattern recognition systems 
653 |a Neural networks 
653 |a Computational efficiency 
653 |a Computing costs 
653 |a Feature selection 
653 |a Emotions 
653 |a Real time 
653 |a Efficiency 
773 0 |t Big Data and Cognitive Computing  |g vol. 9, no. 8 (2025), p. 213-236 
786 0 |d ProQuest  |t Advanced Technologies & Aerospace Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3243980484/abstract/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch 
856 4 0 |3 Full Text + Graphics  |u https://www.proquest.com/docview/3243980484/fulltextwithgraphics/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3243980484/fulltextPDF/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch
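The abstract (MARC field 520) describes a concrete feature-extraction pipeline: pairwise Euclidean distances between facial landmarks capture spatial structure, and three statistical metrics over each distance sequence capture temporal variation. A minimal sketch of that representation is given below. The abstract does not name the three metrics, so mean, standard deviation, and range are placeholder assumptions here, as is the landmark count of 68; the ExtRa-Trees selection and SVM classification stages are noted only in comments.

```python
import numpy as np

def spatio_temporal_features(landmark_seq):
    """Sketch of the statistical spatio-temporal geometric features.

    landmark_seq: array of shape (T, N, 2) -- T video frames, each with
    N facial landmarks as (x, y) coordinates.
    Returns a 1-D feature vector of length C(N, 2) * 3.
    """
    T, N, _ = landmark_seq.shape
    i, j = np.triu_indices(N, k=1)  # all C(N, 2) landmark pairs

    # Spatial structure: all pairwise Euclidean distances per frame,
    # giving a (T, C(N, 2)) matrix of distance sequences.
    diffs = landmark_seq[:, i, :] - landmark_seq[:, j, :]
    dists = np.linalg.norm(diffs, axis=2)

    # Temporal variation: three statistics over each distance sequence
    # (assumed choices -- the abstract does not specify which three).
    return np.concatenate([
        dists.mean(axis=0),
        dists.std(axis=0),
        dists.max(axis=0) - dists.min(axis=0),
    ])

# Example: 10 frames of 68 landmarks -> 68*67/2 * 3 = 6834 raw features.
# Per the abstract, these would then pass through ExtRa-Trees feature
# selection and a linear multi-class SVM (e.g. scikit-learn's
# ExtraTreesClassifier + LinearSVC), omitted from this sketch.
seq = np.random.rand(10, 68, 2)
features = spatio_temporal_features(seq)
print(features.shape)  # (6834,)
```

The dimensionality of this raw vector grows quadratically with the landmark count, which motivates the feature-selection stage the abstract describes before classification.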