Comparative Analysis of Mel-Frequency Cepstral Coefficients and Wavelet Based Audio Signal Processing for Emotion Detection and Mental Health Assessment in Spoken Speech

Bibliographic Details
Published in: arXiv.org (Dec 12, 2024)
Main Author: Agbo, Idoko
Other Authors: El-Sayed, Hoda; Kamruzzan Sarker, M D
Published:
Cornell University Library, arXiv.org
Description
Abstract: The intersection of technology and mental health has spurred innovative approaches to assessing emotional well-being, particularly through computational techniques applied to audio data analysis. This study explores the application of Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) models on wavelet-extracted features and Mel-frequency Cepstral Coefficients (MFCCs) for emotion detection from spoken speech. Data augmentation, feature extraction, normalization, and model training were conducted to evaluate the models' performance in classifying emotional states. Results indicate that the CNN model achieved a higher accuracy of 61% compared to the LSTM model's accuracy of 56%. Both models performed better at predicting specific emotions such as surprise and anger, leveraging distinct audio features like pitch and speed variations. Recommendations include further exploration of advanced data augmentation techniques, combined feature extraction methods, and the integration of linguistic analysis with speech characteristics for improved accuracy in mental health diagnostics. Collaboration on standardized dataset collection and sharing is recommended to foster advancements in affective computing and mental health care interventions.
ISSN:2331-8422
Source: Engineering Database
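The MFCC features named in the abstract can be illustrated with a minimal sketch. This is not the authors' code; it is a generic implementation built from the standard mel-scale formula (2595 · log10(1 + f/700)), Hann-windowed framing, a triangular mel filterbank, and a DCT, with illustrative parameter choices (16 kHz sample rate, 512-point FFT, 13 coefficients):

```python
import numpy as np
from scipy.fftpack import dct

def hz_to_mel(f):
    # Standard mel-scale mapping
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_mels=26, n_mfcc=13):
    # 1. Slice the signal into overlapping Hann-windowed frames
    n_frames = 1 + (len(signal) - n_fft) // hop
    window = np.hanning(n_fft)
    frames = np.stack([signal[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)])
    # 2. Power spectrum of each frame
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2
    # 3. Triangular mel filterbank spanning 0 .. sr/2
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        lo, mid, hi = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, lo:mid] = (np.arange(lo, mid) - lo) / max(mid - lo, 1)
        fbank[m - 1, mid:hi] = (hi - np.arange(mid, hi)) / max(hi - mid, 1)
    # 4. Log mel energies, then a DCT decorrelates them into cepstral coefficients
    mel_energy = np.log(power @ fbank.T + 1e-10)
    return dct(mel_energy, type=2, axis=1, norm="ortho")[:, :n_mfcc]

# One second of a 440 Hz tone yields a (frames, coefficients) matrix
tone = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
features = mfcc(tone)
print(features.shape)  # (61, 13)
```

In a pipeline like the one the abstract describes, each utterance would be reduced to such a coefficient matrix (possibly alongside wavelet features) before being fed to the CNN or LSTM classifier.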