Title
Feature-Level and Model-Level Audiovisual Fusion for Emotion Recognition in the Wild
Abstract
Emotion recognition plays an important role in human-computer interaction (HCI) and has been studied extensively for decades. Although tremendous improvements have been achieved for posed expressions, recognizing human emotions in "close-to-real-world" environments remains a challenge. In this paper, we propose two strategies for fusing information extracted from different modalities, i.e., audio and visual. Specifically, we employ LBP-TOP, an ensemble of CNNs, and a bi-directional LSTM (BLSTM) to extract features from the visual channel, and the OpenSmile toolkit to extract features from the audio channel. Two fusion methods, i.e., feature-level fusion and model-level fusion, are developed to exploit the information extracted from the two channels. Experimental results on the EmotiW2018 AFEW dataset show that the proposed fusion methods significantly outperform the baselines and achieve performance comparable to the state of the art, with model-level fusion performing better when one of the channels fails entirely.
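To make the two strategies concrete, the following minimal sketch contrasts feature-level fusion (concatenating the per-channel feature vectors before classification) with model-level fusion (classifying each channel separately and then mixing the class scores). Everything here is an illustrative assumption: the feature dimensions, the toy linear classifiers, and the mixing weight alpha stand in for the paper's actual networks and settings.

import numpy as np

rng = np.random.default_rng(0)
NUM_CLASSES = 7  # AFEW covers seven emotion categories

# Stand-ins for the per-channel features (dimensions are assumed):
visual_feat = rng.normal(size=1024)  # e.g., a CNN/BLSTM visual descriptor
audio_feat = rng.normal(size=384)    # e.g., an OpenSmile acoustic descriptor

def softmax(z):
    z = z - z.max()                  # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum()

# Feature-level fusion: concatenate the modality features, classify once.
fused_feat = np.concatenate([visual_feat, audio_feat])
W = rng.normal(size=(NUM_CLASSES, fused_feat.size)) * 0.01  # toy classifier
feature_level_probs = softmax(W @ fused_feat)

# Model-level fusion: classify each modality separately, then mix the scores.
Wv = rng.normal(size=(NUM_CLASSES, visual_feat.size)) * 0.01
Wa = rng.normal(size=(NUM_CLASSES, audio_feat.size)) * 0.01
visual_probs = softmax(Wv @ visual_feat)
audio_probs = softmax(Wa @ audio_feat)
alpha = 0.6  # assumed mixing weight between the two channels
model_level_probs = alpha * visual_probs + (1 - alpha) * audio_probs

print(feature_level_probs.argmax(), model_level_probs.argmax())

One practical difference follows directly from this structure: if one channel fails entirely, model-level fusion can simply shift alpha toward the surviving channel's score, whereas feature-level fusion still feeds the corrupted features through the joint classifier, which is consistent with the abstract's observation.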
Year
2019
DOI
10.1109/MIPR.2019.00089
Venue
2019 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR)
Keywords
Emotion Recognition, Audiovisual Fusion, Convolutional Neural Network, Long Short-Term Memory
DocType
Conference
Volume
abs/1906.02728
ISBN
978-1-7281-1198-8
Citations
2
PageRank
0.36
References
0
Authors
9
Name | Order | Citations | PageRank
Jie Cai | 1 | 57 | 4.77
Zibo Meng | 2 | 248 | 13.60
KHAN, AHMED-SHEHAB | 3 | 31 | 3.47
Zhiyuan Li | 4 | 30 | 8.40
James O'Reilly | 5 | 22 | 3.02
Shizhong Han | 6 | 244 | 9.80
Ping Liu | 7 | 359 | 16.70
Min Chen | 8 | 3 | 1.72
Yan Tong | 9 | 11 | 1.85