Intra-modal Relation and Emotional Incongruity Learning using Graph Attention Networks for Multimodal Sarcasm Detection

Bibliographic Details
Published in: The Institute of Electrical and Electronics Engineers, Inc. (IEEE) Conference Proceedings (2025), pp. 1-5
Main Author: Raghuvanshi, Devraj
Other Authors: Gao, Xiyuan; Zhu, Li; Bansal, Shubhi; Coler, Matt; Kumar, Nagendra; Nayak, Shekhar
Published: The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Online Access: Citation/Abstract
Description
Abstract:
Conference Title: ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Conference Dates: 6-11 April 2025
Conference Location: Hyderabad, India
Sarcasm detection poses unique challenges due to the complex nature of sarcastic expressions, which are often embedded across multiple modalities. Current methods frequently fall short in capturing the incongruent emotional cues that are essential for identifying sarcasm in multimodal contexts. In this paper, we present a novel method to capture the pair-wise emotional incongruities between modalities through a cross-modal Contrastive Attention Mechanism (CAM), leveraging advanced data augmentation techniques to enhance data diversity and Supervised Contrastive Learning (SCL) to obtain discriminative embeddings. Additionally, we employ Graph Attention Networks (GATs) to construct modality-specific graphs, capturing intra-modal dependencies. Experiments conducted on the MUStARD++ dataset demonstrate the efficacy of our approach, achieving a macro F1 score of 74.96%, which outperforms state-of-the-art methods.
DOI: 10.1109/ICASSP49660.2025.10887864
Source: Science Database
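
Note: To illustrate the Supervised Contrastive Learning (SCL) component mentioned in the abstract, below is a minimal PyTorch sketch of a standard supervised contrastive loss applied to utterance embeddings. This is not the authors' implementation; the embedding dimension, temperature value, and batch layout are illustrative assumptions.

import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings, labels, temperature=0.07):
    # embeddings: (N, D) per-utterance embeddings; labels: (N,) class labels (e.g., sarcastic vs. not).
    z = F.normalize(embeddings, dim=1)            # project embeddings onto the unit sphere
    sim = z @ z.t() / temperature                 # pairwise scaled cosine similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, -1e9)        # exclude each anchor from its own denominator
    # Positives for an anchor: other samples in the batch sharing its label.
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask
    # Log-probability of each candidate under a softmax over all non-anchor entries.
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0                        # skip anchors with no positive in the batch
    loss = -(log_prob * pos_mask).sum(dim=1)[valid] / pos_counts[valid]
    return loss.mean()

# Example usage: 8 random embeddings with binary sarcasm labels.
emb = torch.randn(8, 128)
lab = torch.tensor([0, 1, 0, 1, 1, 0, 1, 0])
print(supervised_contrastive_loss(emb, lab).item())

Pulling same-label embeddings together while pushing different-label embeddings apart is what makes the learned representations discriminative; how this loss is combined with the paper's cross-modal Contrastive Attention Mechanism and GAT-based intra-modal graphs is described in the full paper, not in this sketch.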