Title
Adapted Dynamic Memory Network for Emotion Recognition in Conversation
Abstract
In this article, we address Emotion Recognition in Conversation (ERC), where conversational data are presented in a multimodal setting. Psychological evidence shows that self and inter-speaker influence are two central factors in emotion dynamics in conversation. State-of-the-art models do not effectively synthesise these two factors. We therefore propose an Adapted Dynamic Memory Network (A-DMN), in which self and inter-speaker influences are modelled individually and then synthesised with respect to the current utterance. Specifically, we model the dependencies among the constituent utterances of a dialogue video using a global RNN, which captures inter-speaker influence. Likewise, each speaker is assigned a separate RNN to capture their self-influence. An Episodic Memory Module is then devised to extract contexts of self and inter-speaker influence and synthesise them to update the memory. This process is repeated for multiple passes until a refined representation is obtained, which is used for the final prediction. Additionally, we explore cross-modal fusion in the context of multimodal ERC and propose a convolution-based method that proves effective at extracting local cross-modal interactions while remaining computationally efficient. Extensive experiments demonstrate that A-DMN outperforms state-of-the-art models on benchmark datasets.
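The following is a minimal PyTorch sketch of the architecture the abstract describes: a global GRU over all utterances for inter-speaker influence, a per-speaker GRU for self-influence, and an episodic memory refined over multiple attention passes oriented to the current utterance. It is an illustration reconstructed purely from the abstract; every module name, dimension, attention form, and update rule here is an assumption rather than the authors' implementation, and the paper's convolution-based cross-modal fusion step is omitted (the input is taken as already-fused utterance features).

```python
# Hedged sketch of an A-DMN-style model, built only from the abstract.
# All design choices below (GRUs, concat attention, GRUCell memory update)
# are assumptions for illustration, not the paper's actual formulation.
import torch
import torch.nn as nn


class ADMNSketch(nn.Module):
    def __init__(self, feat_dim, hidden_dim, num_classes, num_passes=3):
        super().__init__()
        self.num_passes = num_passes
        # Global RNN over the whole utterance sequence: inter-speaker influence.
        self.global_rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        # RNN run over each speaker's own utterances: self-influence.
        # (Shared weights across speakers keep the sketch simple.)
        self.self_rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        # Attention scorers that read a context state together with the memory.
        self.attn_inter = nn.Linear(2 * hidden_dim, 1)
        self.attn_self = nn.Linear(2 * hidden_dim, 1)
        # Memory update: synthesise the two extracted contexts into the memory.
        self.update = nn.GRUCell(2 * hidden_dim, hidden_dim)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def _attend(self, states, memory, scorer):
        # states: (T, H); memory: (H,). Score each state against the memory.
        mem = memory.unsqueeze(0).expand_as(states)           # (T, H)
        scores = scorer(torch.cat([states, mem], dim=-1))     # (T, 1)
        weights = torch.softmax(scores.squeeze(-1), dim=0)    # (T,)
        return (weights.unsqueeze(-1) * states).sum(dim=0)    # (H,)

    def forward(self, utt_feats, speakers, t):
        # utt_feats: (T, D) fused features of one conversation's utterances;
        # speakers: length-T list of speaker ids; t: current utterance index.
        inter_states, _ = self.global_rnn(utt_feats.unsqueeze(0))
        inter_states = inter_states.squeeze(0)                # (T, H)
        # Self-influence stream: the current speaker's utterances up to t.
        own_idx = [i for i in range(t + 1) if speakers[i] == speakers[t]]
        self_states, _ = self.self_rnn(utt_feats[own_idx].unsqueeze(0))
        self_states = self_states.squeeze(0)                  # (T_s, H)

        # Episodic memory: initialise from the current utterance's global
        # state, then refine it over multiple passes using both streams.
        memory = inter_states[t]
        for _ in range(self.num_passes):
            c_inter = self._attend(inter_states[: t + 1], memory, self.attn_inter)
            c_self = self._attend(self_states, memory, self.attn_self)
            memory = self.update(torch.cat([c_inter, c_self]).unsqueeze(0),
                                 memory.unsqueeze(0)).squeeze(0)
        return self.classifier(memory)
```

In this sketch, the multi-pass loop plays the role of the Episodic Memory Module: each pass re-attends to both context streams conditioned on the current memory, so later passes can pick up context that only becomes relevant once earlier context has been absorbed.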
Year
2022
DOI
10.1109/TAFFC.2020.3005660
Venue
IEEE Transactions on Affective Computing
Keywords
Emotion recognition in conversation, adapted dynamic memory network, multimodal feature fusion
DocType
Journal
Volume
13
Issue
3
ISSN
1949-3045
Citations
2
PageRank
0.37
References
21
Authors
3
Name            Order   Citations   PageRank
Songlong Xing   1       6           2.49
Sijie Mai       2       6           4.87
Haifeng Hu      3       270         60.38