Abstract |
---|
Attention mechanisms in deep neural networks have achieved excellent performance on sequence-prediction tasks. Here, we show that these recently proposed attention-based mechanisms (in particular, the Transformer with its parallelizable self-attention layers, and the Memory Fusion Network with attention across modalities and time) also generalize well to multimodal time-series emotion recognition. Using a recently introduced dataset of emotional autobiographical narratives, we adapt and apply these two attention mechanisms to predict emotional valence over time. Our models perform extremely well, in some cases reaching performance comparable to that of human raters. We end with a discussion of the implications of attention mechanisms for affective computing. |
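For reference, the self-attention mechanism the abstract alludes to can be sketched in a few lines. Below is a minimal NumPy illustration of scaled dot-product self-attention over a sequence of per-timestep feature vectors; it is not the authors' implementation, and the projection matrices `Wq`, `Wk`, `Wv` and the toy dimensions are assumptions chosen only for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence.

    X:  (T, d_model) sequence of feature vectors, e.g. per-timestep
        multimodal features. Wq, Wk, Wv: (d_model, d_k) projections.
    Returns (T, d_k): one attention-weighted context vector per timestep.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # (T, T) pairwise affinities
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ V

# Toy usage: 5 timesteps of 8-dim features, projected to 4 dims.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))
Wq, Wk, Wv = (rng.standard_normal((8, 4)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (5, 4)
```

Because every timestep attends to every other in a single matrix product, the computation parallelizes across the sequence, which is the property of the Transformer that the abstract highlights.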
Year | DOI | Venue |
---|---|---|
2019 | 10.1109/ACII.2019.8925497 | 2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII) |
Keywords | Field | DocType
---|---|---
Deep Learning, Attention, Multimodal Emotion Recognition, Time-series Emotion Recognition | Modalities, Social psychology, Task analysis, Emotion recognition, Visualization, Computer science, Cognitive psychology, Narrative, Affective computing, Artificial neural network, Deep neural networks | Conference
ISSN | ISBN | Citations
---|---|---
2156-8103 | 978-1-7281-3889-3 | 0
PageRank | References | Authors
---|---|---
0.34 | 18 | 5
Name | Order | Citations | PageRank |
---|---|---|---
Zhengxuan Wu | 1 | 1 | 1.03 |
Xiyu Zhang | 2 | 0 | 0.34 |
Zhi-Xuan Tan | 3 | 0 | 0.34 |
Jamil Zaki | 4 | 35 | 6.54 |
Desmond Ong | 5 | 10 | 5.23 |