Title
Multiple Spatio-temporal Feature Learning for Video-based Emotion Recognition in the Wild
Abstract
The main difficulty of emotion recognition in the wild (EmotiW) is training a robust model that can handle diverse scenarios and anomalies. The Audio-video Sub-challenge of EmotiW consists of short audio-video clips annotated with emotion labels, and the task is to predict the label of each clip. To improve emotion recognition in videos, we propose a multiple spatio-temporal feature fusion (MSFF) framework that captures emotional information in both the spatial and temporal dimensions from two mutually complementary sources: facial images and audio. The framework consists of two parts: a facial image model and an audio model. For the facial image model, three spatio-temporal neural network architectures are employed to extract discriminative features of different emotions from facial expression images. First, high-level spatial features are obtained from pre-trained convolutional neural networks (CNNs), VGG-Face and ResNet-50, each fed with the face images extracted from every video. The per-frame features are then fed sequentially into a Bidirectional Long Short-Term Memory (BLSTM) network to capture the dynamic variation of facial appearance over the video. In addition to this CNN-RNN structure, another spatio-temporal network, a deep 3-Dimensional Convolutional Neural Network (3D CNN) that extends the 2D convolution kernel to 3D, is applied to capture the evolving emotional information encoded in multiple adjacent frames. For the audio model, spectrogram images generated by preprocessing the speech signal are likewise modeled with a VGG-BLSTM framework to characterize affective fluctuations more effectively. Finally, a fusion strategy over the score matrices produced by the different spatio-temporal networks is proposed to exploit their complementarity and boost recognition performance. Extensive experiments show that the overall accuracy of the proposed MSFF is 60.64%, a large improvement over the baseline that also outperforms the 2017 champion team's result.
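As a rough illustration of the CNN-BLSTM facial branch described in the abstract, the sketch below (in PyTorch) runs a pre-trained 2D CNN over each frame and a bidirectional LSTM over the resulting feature sequence. A public VGG-Face checkpoint is not bundled with torchvision, so an ImageNet ResNet-50 stands in; the hidden size, 7-class output, and use of the last time step are illustrative assumptions, not the paper's exact configuration.

    import torch
    import torch.nn as nn
    from torchvision import models

    class CNNBLSTM(nn.Module):
        # Per-frame CNN features followed by a bidirectional LSTM over time.
        def __init__(self, num_classes=7, hidden=128):
            super().__init__()
            backbone = models.resnet50(weights="IMAGENET1K_V1")  # stand-in for VGG-Face
            self.cnn = nn.Sequential(*list(backbone.children())[:-1])  # drop the fc layer
            self.blstm = nn.LSTM(2048, hidden, batch_first=True, bidirectional=True)
            self.fc = nn.Linear(2 * hidden, num_classes)

        def forward(self, clips):
            # clips: (batch, frames, 3, 224, 224)
            b, t = clips.shape[:2]
            feats = self.cnn(clips.flatten(0, 1)).flatten(1)  # (b*t, 2048)
            out, _ = self.blstm(feats.view(b, t, -1))         # (b, t, 2*hidden)
            return self.fc(out[:, -1])                        # emotion logits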
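The 3D CNN branch extends the 2D convolution kernel along the time axis so that each kernel spans several adjacent frames. A minimal C3D-style sketch, assuming toy layer widths and a 7-class output; the paper's deep 3D CNN is not this exact topology.

    import torch.nn as nn

    class Tiny3DCNN(nn.Module):
        # 3D convolutions whose kernels cover height, width, and adjacent frames.
        def __init__(self, num_classes=7):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool3d((1, 2, 2)),   # pool space only, keep early temporal detail
                nn.Conv3d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool3d(2),
                nn.AdaptiveAvgPool3d(1),
            )
            self.fc = nn.Linear(128, num_classes)

        def forward(self, x):
            # x: (batch, 3, frames, height, width)
            return self.fc(self.features(x).flatten(1))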
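For the audio branch, one plausible preprocessing step turns each clip's soundtrack into a spectrogram image that a VGG-style CNN can consume. A minimal sketch using librosa; the sampling rate, mel-band count, and min-max normalization are assumed values, as the abstract does not specify the exact recipe.

    import numpy as np
    import librosa

    def clip_to_spectrogram(wav_path, sr=16000, n_mels=64):
        # Log-mel spectrogram, min-max normalized for use as a CNN input image.
        y, _ = librosa.load(wav_path, sr=sr)
        mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
        log_mel = librosa.power_to_db(mel, ref=np.max)
        return (log_mel - log_mel.min()) / (log_mel.max() - log_mel.min() + 1e-8)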
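The final step can be read as late fusion: each network outputs a score matrix of shape (n_samples, n_classes), and the matrices are combined before taking the argmax. A minimal sketch with a weighted sum; the equal default weights are a placeholder (such weights are typically tuned on the validation set), and the paper's exact fusion rule may differ.

    import numpy as np

    def fuse_scores(score_matrices, weights=None):
        # Weighted sum of per-model score matrices, each (n_samples, n_classes).
        scores = np.stack(score_matrices)              # (n_models, n_samples, n_classes)
        if weights is None:
            weights = np.full(len(score_matrices), 1.0 / len(score_matrices))
        fused = np.tensordot(weights, scores, axes=1)  # (n_samples, n_classes)
        return fused.argmax(axis=1)                    # predicted label per clip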
Year: 2018
DOI: 10.1145/3242969.3264992
Venue: ICMI
Keywords: Emotion Recognition, Spatio-Temporal Information, Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM), 3D Convolutional Neural Networks (3D CNN)
Field: Computer vision, Pattern recognition, Computer science, Convolutional neural network, Spectrogram, Facial expression, Preprocessor, Artificial intelligence, Artificial neural network, Kernel (image processing), Discriminative model, Feature learning
DocType: Conference
ISBN: 978-1-4503-5692-3
Citations: 6
PageRank: 0.45
References: 19
Authors: 7
Name           Order  Citations  PageRank
Cheng Lu       1      52         6.33
Wenming Zheng  2      1240       80.70
Chaolong Li    3      17         2.31
Chuangao Tang  4      28         4.25
Suyuan Liu     5      6          1.12
Simeng Yan     6      6          0.45
Yuan Zong      7      6          0.79