Emotional Analysis and Interpretation of Music Conducting Works Based on Artificial Intelligence

Saved in:
Bibliographic Details
Published in: International Journal of Advanced Computer Science and Applications vol. 16, no. 7 (2025)
Published: Science and Information (SAI) Organization Limited
Description
Abstract: Emotional expression is the core of the performance of conducted musical works. Based on deep learning, this study proposes an emotion-analysis method for works of music conducting and constructs a complete framework covering audio feature extraction, emotion classification, model optimization, and evaluation. Conducted works of different styles were selected; audio features were extracted using the short-time Fourier transform (STFT) and mel-frequency cepstral coefficients (MFCCs), and emotional categories were classified with a convolutional neural network combined with a bidirectional long short-term memory (CNN-BiLSTM) structure. The experimental results show that the model performs well in recognizing joy, sadness, and tranquility, with an average classification accuracy of 88.5% and an F1-score exceeding 0.87 across the core emotional categories. Works of different styles differ in emotional classification: classical works tend toward tranquility and joy, while romantic works account for a higher proportion of the sadness category. Conducting style also influences classification results, as different conductors' treatment of rhythm, dynamics, and timbre leads to differences in emotion recognition of the same work. The findings provide new methodological support for affective computing in music, with practical applications in music education, intelligent recommendation, and related fields.
Future research will optimize the model structure and incorporate multimodal data to improve the accuracy of music emotion recognition, opening a broader research space at the intersection of music analysis, interpretation technology, and artificial intelligence.
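As a rough illustration of the feature-extraction step the abstract describes, the following is a minimal NumPy sketch of an STFT magnitude spectrogram. The frame length, hop size, and window choice here are common defaults assumed for illustration, not parameters reported in the paper; a full pipeline would then map the spectrogram onto a mel filterbank to obtain MFCCs before feeding a CNN-BiLSTM classifier.

```python
import numpy as np

def stft_magnitude(signal, frame_len=512, hop=256):
    """Magnitude spectrogram via short-time Fourier transform (Hann window).

    Returns an array of shape (n_frames, frame_len // 2 + 1).
    """
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    # rfft keeps only the non-negative frequency bins of the real signal
    return np.abs(np.fft.rfft(frames, axis=1))

# Sanity check: one second of a 440 Hz tone at a 16 kHz sample rate
sr = 16000
t = np.arange(sr) / sr
spec = stft_magnitude(np.sin(2 * np.pi * 440 * t))

# The strongest average bin should sit near 440 Hz
peak_bin = spec.mean(axis=0).argmax()
peak_hz = peak_bin * sr / 512
```

With a 512-sample frame at 16 kHz the frequency resolution is 31.25 Hz per bin, so the detected peak lands within one bin of the true 440 Hz tone.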
ISSN:2158-107X
2156-5570
DOI:10.14569/IJACSA.2025.0160709
Source: Advanced Technologies & Aerospace Database