Title
Multi-task Learning for Multi-modal Emotion Recognition and Sentiment Analysis
Abstract
Related tasks often depend on each other and perform better when solved in a joint framework. In this paper, we present a deep multi-task learning framework that jointly performs both sentiment and emotion analysis. The multi-modal inputs (i.e., text, acoustic and visual frames) of a video convey diverse and distinctive information, and usually do not contribute equally to the decision making. We propose a context-level inter-modal attention framework for simultaneously predicting the sentiment and expressed emotions of an utterance. We evaluate our proposed approach on the CMU-MOSEI dataset for multi-modal sentiment and emotion analysis. Evaluation results suggest that the multi-task learning framework offers an improvement over the single-task framework. The proposed approach achieves new state-of-the-art performance for both sentiment analysis and emotion analysis.
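The abstract describes an inter-modal attention scheme in which text, acoustic, and visual features contribute unequally to the prediction. A minimal sketch of this idea (with a hypothetical shared query vector and toy features, not the paper's actual architecture) could look like:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def inter_modal_attention(modalities):
    """Fuse modality feature vectors with attention weights.

    `modalities` maps a modality name ("text", "acoustic", "visual")
    to a feature vector. Scores come from a dot product with a fixed
    query vector, which stands in for the learned attention parameters.
    """
    query = [0.5, 1.0, -0.5]  # hypothetical query; learned in practice
    scores = {name: sum(q * v for q, v in zip(query, vec))
              for name, vec in modalities.items()}
    names = list(modalities)
    weights = dict(zip(names, softmax([scores[n] for n in names])))
    # Fused representation: attention-weighted sum of modality vectors
    fused = [sum(weights[n] * modalities[n][i] for n in names)
             for i in range(len(query))]
    return weights, fused
```

The weighted sum lets the model emphasize whichever modality is most informative for a given utterance, which is the motivation stated in the abstract.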
Year
Venue
Field
2019
North American Chapter of the Association for Computational Linguistics (NAACL)
Multi-task learning,Sentiment analysis,Computer science,Emotion recognition,Artificial intelligence,Natural language processing,Modal
DocType:
Volume: abs/1905.05812
Citations: 0
Journal:
PageRank: 0.34
References: 0
Authors: 6
Name                     Order  Citations  PageRank
Md. Shad Akhtar          1      0          0.34
Dushyant Singh Chauhan   2      4          2.44
Deepanway Ghosal         3      21         3.46
Soujanya Poria           4      1336       60.98
Asif Ekbal               5      737        119.31
Pushpak Bhattacharyya    6      795        186.21