Title
Semi-supervised Multi-modal Emotion Recognition with Cross-Modal Distribution Matching
Abstract
Automatic emotion recognition is an active research topic with a wide range of applications. Due to the high cost of manual annotation and the inevitable ambiguity of emotion labels, the development of emotion recognition datasets is limited in both scale and quality. A key challenge is therefore how to build effective models with limited data resources. Previous works have explored different approaches to tackle this challenge, including data augmentation, transfer learning, and semi-supervised learning. However, these existing approaches suffer from weaknesses such as training instability, large performance loss during transfer, or only marginal improvement. In this work, we propose a novel semi-supervised multi-modal emotion recognition model based on cross-modal distribution matching, which leverages abundant unlabeled data to enhance model training under the assumption that the inner emotional state at the utterance level is consistent across modalities. We conduct extensive experiments to evaluate the proposed model on two benchmark datasets, IEMOCAP and MELD. The experimental results show that the proposed semi-supervised learning model can effectively utilize unlabeled data and combine multiple modalities to boost emotion recognition performance, outperforming other state-of-the-art approaches under the same conditions. The proposed model also achieves competitive performance compared with existing approaches that take advantage of additional auxiliary information such as speaker identity and interaction context.
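To make the core idea concrete, below is a minimal, hypothetical sketch of cross-modal distribution matching for semi-supervised training: on unlabeled utterances, the emotion distributions predicted from different modalities (audio and text here) are pulled together, following the abstract's assumption that the underlying utterance-level emotion is consistent across modalities. The encoder architecture, feature dimensions, loss weight, and the choice of a symmetric KL consistency loss are illustrative assumptions, not the authors' exact method.

```python
# Hypothetical sketch: semi-supervised emotion recognition with a
# cross-modal consistency (distribution matching) loss on unlabeled data.
# All shapes and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_EMOTIONS = 4  # e.g., angry / happy / sad / neutral (a common IEMOCAP setup)

class ModalityEncoder(nn.Module):
    """Maps a fixed-size utterance-level feature vector to emotion logits."""
    def __init__(self, in_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, NUM_EMOTIONS),
        )

    def forward(self, x):
        return self.net(x)

audio_enc = ModalityEncoder(in_dim=100)   # assumed audio feature size
text_enc = ModalityEncoder(in_dim=300)    # assumed text feature size

def supervised_loss(a_feat, t_feat, labels):
    # Standard cross-entropy on labeled utterances, per modality.
    return (F.cross_entropy(audio_enc(a_feat), labels)
            + F.cross_entropy(text_enc(t_feat), labels))

def distribution_matching_loss(a_feat, t_feat):
    # On unlabeled utterances: pull the per-modality emotion distributions
    # together with a symmetric KL divergence, one plausible instantiation
    # of "cross-modal distribution matching".
    log_p_a = F.log_softmax(audio_enc(a_feat), dim=-1)
    log_p_t = F.log_softmax(text_enc(t_feat), dim=-1)
    kl_at = F.kl_div(log_p_a, log_p_t.exp(), reduction="batchmean")
    kl_ta = F.kl_div(log_p_t, log_p_a.exp(), reduction="batchmean")
    return 0.5 * (kl_at + kl_ta)

# One semi-supervised training step over a labeled and an unlabeled batch.
opt = torch.optim.Adam(
    list(audio_enc.parameters()) + list(text_enc.parameters()), lr=1e-3)
a_l, t_l = torch.randn(8, 100), torch.randn(8, 300)    # labeled batch (dummy)
y_l = torch.randint(0, NUM_EMOTIONS, (8,))
a_u, t_u = torch.randn(32, 100), torch.randn(32, 300)  # unlabeled batch (dummy)

loss = supervised_loss(a_l, t_l, y_l) + 0.1 * distribution_matching_loss(a_u, t_u)
opt.zero_grad()
loss.backward()
opt.step()
```

In this sketch the unlabeled batch can be much larger than the labeled one, which is how abundant unlabeled data contributes to training without any annotation.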
Year
2020
DOI
10.1145/3394171.3413579
Venue
MM '20: The 28th ACM International Conference on Multimedia, Seattle, WA, USA, October 2020
DocType
Conference
ISBN
978-1-4503-7988-5
Citations
1
PageRank
0.35
References
0
Authors
3
Name            Order   Citations   PageRank
Jingjun Liang   1       4           1.75
Ruichen Li      2       3           2.08
Qin Jin         3       639         66.86