Title
A Multimodal LSTM for Predicting Listener Empathic Responses Over Time
Abstract
People naturally understand the emotions of those around them, and often empathize with them as well. In this paper, we predict the emotional valence of an empathic listener over time as they listen to a speaker narrating a life story. We use the dataset provided by the OMG-Empathy Prediction Challenge, a workshop held in conjunction with IEEE FG 2019. We present a multimodal LSTM model with feature-level fusion and local attention that predicts empathic responses from audio, text, and visual features. Our best-performing model, which used only the audio and text features, achieved a concordance correlation coefficient (CCC) of .29 and .32 on the Validation set for the Generalized and Personalized tracks respectively, and a CCC of .14 and .14 on the held-out Test set. We discuss the difficulties faced and the lessons learnt in tackling this challenge.
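The abstract names two concrete components: feature-level fusion of per-timestep modality features fed to an LSTM that regresses listener valence, and the concordance correlation coefficient (CCC) used for evaluation. The sketch below is a minimal illustration of those two ideas, not the authors' code: the PyTorch framework, all dimensions, the FusionLSTM and ccc names, and the omission of the visual stream and local-attention component are assumptions made purely for illustration.

    # Minimal sketch (assumed PyTorch; not the paper's implementation).
    import torch
    import torch.nn as nn

    class FusionLSTM(nn.Module):
        """Feature-level fusion: concatenate modality features per timestep, then run an LSTM."""

        def __init__(self, audio_dim=40, text_dim=300, hidden_dim=128):
            super().__init__()
            self.lstm = nn.LSTM(audio_dim + text_dim, hidden_dim, batch_first=True)
            self.head = nn.Linear(hidden_dim, 1)  # per-timestep valence in [-1, 1]

        def forward(self, audio, text):
            # audio: (batch, time, audio_dim), text: (batch, time, text_dim)
            fused = torch.cat([audio, text], dim=-1)        # feature-level fusion
            out, _ = self.lstm(fused)                       # (batch, time, hidden_dim)
            return torch.tanh(self.head(out)).squeeze(-1)   # (batch, time)

    def ccc(pred, target):
        """Concordance correlation coefficient between two 1-D valence sequences."""
        pred_mean, target_mean = pred.mean(), target.mean()
        pred_var = pred.var(unbiased=False)
        target_var = target.var(unbiased=False)
        cov = ((pred - pred_mean) * (target - target_mean)).mean()
        return 2 * cov / (pred_var + target_var + (pred_mean - target_mean) ** 2)

CCC, unlike plain Pearson correlation, penalizes differences in mean and scale between the predicted and annotated valence traces, which is why it is the standard metric for this challenge.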
Year
2018
DOI
10.1109/fg.2019.8756577
Venue
2019 14th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2019)
Field
Computer science, Concordance correlation coefficient, Natural language processing, Artificial intelligence, Test set
DocType
Journal
Volume
abs/1812.04891
ISSN
2326-5396
Citations
1
PageRank
0.35
References
11
Authors
4
Name                Order  Citations  PageRank
Zong Xuan Tan       1      1          0.35
Arushi Goel         2      4          1.41
Thanh-Son Nguyen    3      1          0.35
Desmond Ong         4      10         5.23