Title
Multi-Feature Fusion for Multimodal Attentive Sentiment Analysis.
Abstract
Sentiment analysis is an interesting and challenging task. Researchers have mostly focused on single-modal (image or text) emotion recognition, while less attention has been paid to the joint analysis of multi-modal data. Most existing multi-modal sentiment analysis algorithms that incorporate an attention mechanism focus only on local regions of images and ignore the emotional information carried by an image's global features. Motivated by this state of the art, we propose a novel multi-modal sentiment analysis model that attends to local image features as well as the global contextual features of the image, and a novel feature fusion mechanism is used to combine the features from the different modalities. In the proposed model, a convolutional neural network (CNN) extracts region maps of images and an attention mechanism computes the attention coefficients; a CNN with fewer hidden layers extracts the global feature, and a long short-term memory (LSTM) network extracts the textual feature. Finally, a tensor fusion network (TFN) fuses all features from the different modalities. Extensive experiments conducted on both weakly labeled and manually labeled datasets demonstrate the superiority of the proposed method.
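The fusion step named in the abstract, a tensor fusion network (TFN), combines the per-modality embeddings via an outer product so that unimodal, bimodal, and trimodal interactions are all represented. A minimal NumPy sketch of that operation (function and variable names are illustrative, not taken from the paper; this follows the standard TFN construction of appending a constant 1 to each modality vector before the outer product):

```python
import numpy as np

def tensor_fusion(z_local, z_global, z_text):
    """Fuse three modality embeddings with a tensor (outer) product.

    Each vector is augmented with a constant 1 so the resulting tensor
    contains the unimodal and bimodal terms alongside the trimodal ones.
    """
    a = np.concatenate([z_local, [1.0]])   # local attentive image feature
    b = np.concatenate([z_global, [1.0]])  # global contextual image feature
    c = np.concatenate([z_text, [1.0]])    # LSTM textual feature
    fused = np.einsum('i,j,k->ijk', a, b, c)  # shape (da+1, db+1, dc+1)
    return fused.reshape(-1)  # flattened for a downstream classifier

# Example: 4-, 3-, and 5-dimensional embeddings fuse into a
# (4+1) * (3+1) * (5+1) = 120-dimensional vector.
f = tensor_fusion(np.ones(4), np.ones(3), np.ones(5))
```

The flattened tensor is then typically fed to fully connected layers for sentiment prediction; the dimensionality grows multiplicatively with the number of modalities, which is the main cost of this fusion scheme.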
Year
2019
DOI
10.1145/3338533.3366591
Venue
MMAsia '19: ACM Multimedia Asia, Beijing, China, December 2019
Field
Computer vision, Feature fusion, Sentiment analysis, Computer science, Artificial intelligence, Natural language processing
DocType
Conference
ISBN
978-1-4503-6841-4
Citations
0
PageRank
0.34
References
0
Authors
6
Name             Order  Citations  PageRank
Man A            1      0          0.34
Yuanyuan Pu      2      6          3.54
Dan Xu           3      14         2.10
Wenhua Qian      4      3          1.04
Zhengpeng Zhao   5      0          1.35
Qiuxia Yang      6      0          0.34