A deep learning framework for gender sensitive speech emotion recognition based on MFCC feature selection and SHAP analysis

Saved in:
Bibliographic Details
Published in: Scientific Reports (Nature Publisher Group), vol. 15, no. 1 (2025), pp. 28569-28588
Main Author: Hu, Qingqing
Other Authors: Peng, Yiran; Zheng, Zhong
Published: Nature Publishing Group
Description
Summary: Speech is one of the most efficient methods of communication among humans, inspiring advancements in machine speech processing under Natural Language Processing (NLP). This field aims to enable computers to analyze, comprehend, and generate human language naturally. Speech processing, as a subset of artificial intelligence, is rapidly expanding due to its applications in emotion recognition, human-computer interaction, and sentiment analysis. This study introduces a novel algorithm for emotion recognition from speech using deep learning techniques. The proposed model achieves up to a 15% improvement in accuracy over state-of-the-art deep learning methods for speech emotion recognition. It employs advanced supervised learning algorithms and deep neural network architectures, including Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) units. These models are trained on labeled datasets to classify emotions such as happiness, sadness, anger, fear, surprise, and neutrality. The research highlights the system's potential for real-time applications, such as analyzing audience emotional responses during live television broadcasts. By leveraging advancements in deep learning, the model achieves high accuracy in understanding and predicting emotional states, offering valuable insights into user behavior. This approach contributes to diverse domains, including media analysis, customer feedback systems, and human-machine interaction, showcasing the transformative potential of combining speech processing with neural networks.
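The abstract describes a hybrid CNN + LSTM classifier trained on MFCC features. The paper's exact architecture is not given in this record, so the following is only a minimal illustrative sketch of that general pipeline in Python (librosa for MFCC extraction, Keras for the model); the layer sizes, the six-class label set, and names such as N_MFCC and MAX_FRAMES are assumptions for demonstration, not the authors' method.

```python
# Illustrative sketch (not the authors' exact model): CNN + LSTM over
# MFCC features for speech emotion recognition. All hyperparameters
# below are assumed values for demonstration only.
import numpy as np
import librosa
import tensorflow as tf
from tensorflow.keras import layers, models

EMOTIONS = ["happiness", "sadness", "anger", "fear", "surprise", "neutral"]
N_MFCC = 40        # MFCC coefficients per frame (assumed)
MAX_FRAMES = 200   # fixed sequence length after padding/truncation (assumed)

def extract_mfcc(path: str, sr: int = 16000) -> np.ndarray:
    """Load an audio file and return a (MAX_FRAMES, N_MFCC) MFCC matrix."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=N_MFCC).T  # (frames, N_MFCC)
    # Pad or truncate to a fixed frame count so batches are uniform.
    if mfcc.shape[0] < MAX_FRAMES:
        mfcc = np.pad(mfcc, ((0, MAX_FRAMES - mfcc.shape[0]), (0, 0)))
    return mfcc[:MAX_FRAMES]

def build_model() -> tf.keras.Model:
    """1-D convolutions capture local spectral patterns; the LSTM models
    their temporal evolution; softmax outputs the six emotion classes."""
    model = models.Sequential([
        layers.Input(shape=(MAX_FRAMES, N_MFCC)),
        layers.Conv1D(64, kernel_size=5, activation="relu", padding="same"),
        layers.MaxPooling1D(2),
        layers.Conv1D(128, kernel_size=5, activation="relu", padding="same"),
        layers.MaxPooling1D(2),
        layers.LSTM(128),
        layers.Dropout(0.3),
        layers.Dense(len(EMOTIONS), activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Hypothetical usage, assuming wav_paths and integer labels y_labels exist:
# X = np.stack([extract_mfcc(p) for p in wav_paths])
# model = build_model()
# model.fit(X, y_labels, epochs=30, validation_split=0.2)
```

The SHAP analysis mentioned in the title would, under this sketch, attribute a trained model's predictions back to individual MFCC coefficients; the record itself does not specify how the authors applied it.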
ISSN:2045-2322
DOI:10.1038/s41598-025-14016-w
Source: Science Database