Title
Context- and Knowledge-Aware Graph Convolutional Network for Multimodal Emotion Recognition
Abstract
This work proposes an approach for emotion recognition in conversation that leverages context modeling, knowledge enrichment, and multimodal (text and audio) learning based on a graph convolutional network (GCN). We first construct two distinctive graphs to model the contextual interaction and the knowledge dynamics. We then introduce an affective lexicon into knowledge graph construction to enrich the emotional polarity of each concept, i.e., the knowledge related to each token in an utterance. Next, we balance the context and the affect-enriched knowledge by incorporating both into the construction of a new adjacency matrix for the GCN architecture, and train them jointly with multiple modalities to effectively capture the semantics-sensitive and knowledge-sensitive contextual dependencies of each conversation. Our model outperforms state-of-the-art benchmarks with over 22.6% and 11% relative error reduction in weighted-F1 on the IEMOCAP and MELD databases, respectively, demonstrating the superiority of our method for emotion recognition.
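The adjacency construction described in the abstract can be pictured with a small sketch. Below is a minimal, illustrative GCN layer that blends a context-graph adjacency with an affect-enriched knowledge-graph adjacency over utterance nodes; the class name, the blending weight alpha, and the toy inputs are assumptions made for illustration and do not reproduce the authors' implementation.

```python
# Minimal sketch (not the authors' code): a single GCN layer whose adjacency
# blends a context graph and an affect-enriched knowledge graph over
# utterance nodes. All names (alpha, ContextKnowledgeGCNLayer, the random
# inputs) are illustrative assumptions.
import torch
import torch.nn as nn


class ContextKnowledgeGCNLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, alpha: float = 0.5):
        super().__init__()
        self.alpha = alpha              # balance between context and knowledge graphs
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, a_context, a_knowledge):
        # x:           (N, in_dim) fused text+audio utterance features
        # a_context:   (N, N) adjacency from the contextual interaction graph
        # a_knowledge: (N, N) adjacency from the affect-enriched knowledge graph
        a = self.alpha * a_context + (1.0 - self.alpha) * a_knowledge
        a = a + torch.eye(a.size(0))    # add self-loops
        d_inv_sqrt = torch.diag(a.sum(dim=1).clamp(min=1e-6).pow(-0.5))
        a_norm = d_inv_sqrt @ a @ d_inv_sqrt   # symmetric normalization
        return torch.relu(self.linear(a_norm @ x))


# Toy usage: 5 utterances, 128-dim multimodal features, random graphs.
if __name__ == "__main__":
    n, d = 5, 128
    x = torch.randn(n, d)
    a_ctx = torch.rand(n, n)
    a_kng = torch.rand(n, n)
    layer = ContextKnowledgeGCNLayer(d, 64)
    print(layer(x, a_ctx, a_kng).shape)  # torch.Size([5, 64])
```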
Year
2022
DOI
10.1109/MMUL.2022.3173430
Venue
IEEE MultiMedia
Keywords
Emotion recognition, Context modeling, Semantics, Oral communication, Knowledge based systems, Task analysis, Social networking (online)
DocType
Journal
Volume
29
Issue
3
ISSN
1070-986X
Citations
0
PageRank
0.34
References
6
Authors
7
Name            Order  Citations  PageRank
Yahui Fu        1      0          0.34
Shogo Okada     2      0          0.34
Longbiao Wang   3      272        44.38
Lili Guo        4      0          0.68
Yaodong Song    5      0          1.35
Jia-Xing Liu    6      1          5.08
Jianwu Dang     7      0          1.69