Title |
---|
Multimodal Affective Analysis Using Hierarchical Attention Strategy With Word-Level Alignment |
Abstract |
---|
Multimodal affective computing, the task of learning to recognize and interpret human affect and subjective information from multiple data sources, remains challenging because: (i) it is hard to extract informative features representing human affect from heterogeneous inputs; (ii) current fusion strategies only fuse modalities at abstract levels, ignoring time-dependent interactions between them. To address these issues, we introduce a hierarchical multimodal architecture with attention and word-level fusion to classify utterance-level sentiment and emotion from text and audio data. Our model outperforms state-of-the-art approaches on published datasets, and we demonstrate that its synchronized attention over modalities offers visual interpretability. |
Year | DOI | Venue |
---|---|---|
2018 | 10.18653/v1/p18-1207 | PROCEEDINGS OF THE 56TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL), VOL 1 |
DocType | Volume | ISSN |
---|---|---|
Conference | abs/1805.08660 | 0736-587X |
Citations | PageRank | References |
---|---|---|
0 | 0.34 | 18 |
Authors |
---|
6 |
Name | Order | Citations | PageRank |
---|---|---|---|
Yue Gu | 1 | 39 | 6.08 |
Kangning Yang | 2 | 14 | 2.00 |
Shiyu Fu | 3 | 7 | 1.16 |
Shuhong Chen | 4 | 49 | 10.21 |
Xinyu Li | 5 | 88 | 37.72 |
Ivan Marsic | 6 | 716 | 91.96 |