Title
An Emotion-embedded Visual Attention Model for Dimensional Emotion Context Learning
Abstract
Dimensional emotion recognition has attracted increasing attention from researchers in various fields, including psychology, cognitive science, and computer science. In this paper, we propose an emotion-embedded visual attention model (EVAM) to learn emotion context information for predicting affective dimension values from video sequences. First, a deep CNN is used to generate high-level representations of the raw face images. Second, a visual attention model based on the gated recurrent unit (GRU) is employed to learn context information from the sequences of face features. Third, the k-means algorithm is adapted to embed previous emotions into the attention model to produce more robust time-series predictions, emphasizing the influence of the previous emotion on the current affective prediction. All experiments are carried out on the AVEC 2016 and AVEC 2017 databases. The experimental results validate the effectiveness of our method, and competitive results are obtained.
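For readers who want a concrete picture of the pipeline the abstract describes, the following is a minimal illustrative sketch in PyTorch, not the authors' implementation: the module name (EVAMSketch), the dimensions, and the specific way a previous-emotion k-means cluster vector is injected into the attention scores are all assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EVAMSketch(nn.Module):
    """Illustrative sketch of the EVAM idea (assumed structure):
    CNN face features -> GRU -> attention conditioned on a
    previous-emotion embedding -> valence/arousal regression."""

    def __init__(self, feat_dim=512, hidden_dim=128, n_clusters=8):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        # Embedding table standing in for the k-means cluster centroids
        # of previously predicted emotions (assumption: one vector per cluster).
        self.emotion_embed = nn.Embedding(n_clusters, hidden_dim)
        self.attn = nn.Linear(2 * hidden_dim, 1)  # scores each time step
        self.head = nn.Linear(hidden_dim, 2)      # valence and arousal

    def forward(self, cnn_feats, prev_cluster):
        # cnn_feats: (B, T, feat_dim) face features from a pretrained CNN
        # prev_cluster: (B,) k-means cluster index of the previous emotion
        h, _ = self.gru(cnn_feats)                       # (B, T, H)
        e = self.emotion_embed(prev_cluster)             # (B, H)
        e = e.unsqueeze(1).expand(-1, h.size(1), -1)     # (B, T, H)
        scores = self.attn(torch.cat([h, e], dim=-1))    # (B, T, 1)
        alpha = F.softmax(scores, dim=1)                 # attention over time
        context = (alpha * h).sum(dim=1)                 # (B, H)
        return self.head(context)                        # (B, 2)

# Quick shape check: 4 clips of 16 frames, 8 hypothetical emotion clusters.
model = EVAMSketch()
x = torch.randn(4, 16, 512)
prev = torch.randint(0, 8, (4,))
print(model(x, prev).shape)  # torch.Size([4, 2])
```

Conditioning the attention scores on the previous emotion is one plausible reading of "embedding previous emotion into the attention model"; the paper itself should be consulted for the exact formulation.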
Year
2019
DOI
10.1109/ACCESS.2019.2911714
Venue
IEEE ACCESS
Keywords
Dimensional emotion, attention mechanism, context learning
Field
Computer science, Visual attention, Human–computer interaction, Distributed computing
DocType
Journal
Volume
7
ISSN
2169-3536
Citations
0
PageRank
0.34
References
0
Authors
5
Name           Order   Citations   PageRank
Yuhao Tang     1       0           0.34
Qirong Mao     2       261         34.29
Hongjie Jia    3       0           0.68
Heping Song    4       1           1.02
Yongzhao Zhan  5       344         51.09