Title
A Multi-task Learning Framework for Time-continuous Emotion Estimation from Crowd Annotations
Abstract
We propose multi-task learning (MTL) for time-continuous or dynamic emotion (valence and arousal) estimation in movie scenes. Since compiling annotated training data for dynamic emotion prediction is tedious, we employ crowdsourcing to acquire it. Even though the crowdworkers come from diverse demographics, we demonstrate that MTL can effectively discover (1) consistent patterns in their dynamic emotion perception, and (2) the low-level audio and video features that contribute to their valence and arousal (VA) elicitation. Finally, we show that MTL-based regression models, which simultaneously learn the relationship between low-level audio-visual features and high-level VA ratings from a collection of movie scenes, predict VA ratings for time-contiguous snippets from each scene more effectively than scene-specific models.
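The MTL regression idea in the abstract — jointly learning one VA predictor per movie scene so that scenes share statistical strength — can be illustrated with a common textbook formulation in which each task's weights decompose as a shared component plus a small task-specific offset, w_t = w0 + v_t. This is a hedged sketch of that generic scheme, not the paper's exact model; all function names, penalties, and hyperparameters here are illustrative assumptions.

```python
import numpy as np

def fit_mtl(Xs, ys, lam_shared=0.01, lam_task=1.0, lr=0.05, iters=500):
    """Jointly fit one linear regressor per task (e.g. per movie scene).

    Each task's weights are modelled as w_t = w0 + v_t: w0 is shared
    across all tasks, while the task-specific offsets v_t are kept small
    by a stronger ridge penalty (lam_task > lam_shared). Plain gradient
    descent on the summed squared-error objective; illustrative only.
    """
    T, d = len(Xs), Xs[0].shape[1]
    w0, V = np.zeros(d), np.zeros((T, d))
    for _ in range(iters):
        g0 = 2.0 * lam_shared * w0           # gradient accumulator for w0
        for t in range(T):
            resid = Xs[t] @ (w0 + V[t]) - ys[t]
            g = 2.0 * Xs[t].T @ resid / len(ys[t])
            g0 += g / T                       # shared part sees every task
            V[t] -= lr * (g + 2.0 * lam_task * V[t])
        w0 -= lr * g0
    return w0, V

def predict(X, w0, v_t):
    """Prediction for one task from its shared + task-specific weights."""
    return X @ (w0 + v_t)
```

As lam_task grows, all tasks collapse onto a single shared regressor; as it shrinks toward zero, each scene is fit independently — the MTL coupling interpolates between these two extremes, which is why it can outperform purely scene-specific models when per-scene data is scarce.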
Year
2014
DOI
10.1145/2660114.2660116
Venue
CrowdMM
Keywords
human factors, pattern analysis, human information processing, crowd annotation, time-continuous emotion estimation, multi-task learning, movie clips
DocType
Conference
Citations
1
PageRank
0.35
References
22
Authors
7
Name (in author order)
1. Mojtaba Khomami Abadi
2. Azad Abad
3. Ramanathan Subramanian
4. Negar Rostamzadeh
5. Elisa Ricci
6. Jagannadan Varadarajan
7. Nicu Sebe