Abstract
---
Related tasks are often interdependent and perform better when solved in a joint framework. In this paper, we present a deep multi-task learning framework that jointly performs both sentiment and emotion analysis. The multi-modal inputs (i.e., text, acoustic, and visual frames) of a video convey diverse and distinctive information, and usually do not contribute equally to the decision making. We propose a context-level inter-modal attention framework for simultaneously predicting the sentiment and expressed emotions of an utterance. We evaluate our proposed approach on the CMU-MOSEI dataset for multi-modal sentiment and emotion analysis. Evaluation results suggest that the multi-task learning framework offers an improvement over the single-task framework. The proposed approach achieves new state-of-the-art performance for both sentiment analysis and emotion analysis.
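The abstract does not spell out the architecture, but the ingredients it names (per-modality utterance features, context-level inter-modal attention, and a shared representation feeding two task heads) can be illustrated with a minimal PyTorch sketch. Everything below is an illustrative assumption, not the paper's actual implementation: the class names (`InterModalAttention`, `MultiTaskSentimentEmotion`), the GRU encoders, the pairwise attention formulation, and all dimensions are hypothetical.

```python
# Minimal sketch, assuming: one GRU per modality builds context-aware utterance
# representations, a dot-product inter-modal attention fuses modality pairs, and
# a shared layer feeds separate sentiment and emotion heads (the multi-task part).
import torch
import torch.nn as nn
import torch.nn.functional as F


class InterModalAttention(nn.Module):
    """Attend from one modality's utterance sequence over another modality's."""

    def forward(self, query, context):
        # query, context: (batch, utterances, dim)
        scores = torch.matmul(query, context.transpose(1, 2))  # (batch, utt, utt)
        weights = F.softmax(scores, dim=-1)
        return torch.matmul(weights, context)                  # (batch, utt, dim)


class MultiTaskSentimentEmotion(nn.Module):
    def __init__(self, dim=128, n_emotions=6):
        super().__init__()
        self.rnn_text = nn.GRU(dim, dim, batch_first=True)
        self.rnn_acoustic = nn.GRU(dim, dim, batch_first=True)
        self.rnn_visual = nn.GRU(dim, dim, batch_first=True)
        self.attn = InterModalAttention()
        # Shared fusion layer, then task-specific heads.
        self.shared = nn.Linear(6 * dim, dim)
        self.sentiment_head = nn.Linear(dim, 1)         # sentiment score
        self.emotion_head = nn.Linear(dim, n_emotions)  # multi-label emotions

    def forward(self, text, acoustic, visual):
        t, _ = self.rnn_text(text)
        a, _ = self.rnn_acoustic(acoustic)
        v, _ = self.rnn_visual(visual)
        # Pairwise inter-modal attention in both directions, then concatenate.
        pairs = [self.attn(t, a), self.attn(a, v), self.attn(v, t),
                 self.attn(a, t), self.attn(v, a), self.attn(t, v)]
        shared = torch.tanh(self.shared(torch.cat(pairs, dim=-1)))
        return self.sentiment_head(shared), self.emotion_head(shared)


model = MultiTaskSentimentEmotion()
text = torch.randn(2, 10, 128)      # (batch, utterances, features) per modality
acoustic = torch.randn(2, 10, 128)
visual = torch.randn(2, 10, 128)
sentiment, emotions = model(text, acoustic, visual)
print(sentiment.shape, emotions.shape)  # (2, 10, 1) and (2, 10, 6)
```

In such a setup, training would use a joint objective, e.g. a weighted sum of a sentiment loss and a multi-label emotion loss over the shared representation; the specific losses and weights here would again be assumptions rather than details from the abstract.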
Year | Venue | Field
---|---|---
2019 | North American Chapter of the Association for Computational Linguistics (NAACL) | Multi-task learning, Sentiment analysis, Computer science, Emotion recognition, Artificial intelligence, Natural language processing, Modal

DocType | Volume | Citations
---|---|---
Journal | abs/1905.05812 | 0

PageRank | References | Authors
---|---|---
0.34 | 0 | 6
Name | Order | Citations | PageRank |
---|---|---|---
Md. Shad Akhtar | 1 | 0 | 0.34 |
Dushyant Singh Chauhan | 2 | 4 | 2.44 |
Deepanway Ghosal | 3 | 21 | 3.46 |
Soujanya Poria | 4 | 1336 | 60.98 |
Asif Ekbal | 5 | 737 | 119.31 |
Pushpak Bhattacharyya | 6 | 795 | 186.21 |