Title
MASAD: A large-scale dataset for multimodal aspect-based sentiment analysis
Abstract
Aspect-based sentiment analysis has achieved great success in recent years. Most existing work focuses on determining the sentiment polarity of a given aspect according to the given text, while little attention has been paid to visual information or multimodal content for aspect-based sentiment analysis. Multimodal content is becoming increasingly popular on mainstream online social platforms and can help better extract user sentiments toward a given aspect. Only a few studies have focused on this new task: Multimodal Aspect-based Sentiment Analysis (MASA), which performs aspect-based sentiment analysis by integrating both texts and images. In this paper, we propose a multimodal interaction model for MASA to learn the relationships among the text, image, and aspect via interaction layers and adversarial training. Additionally, we build a new large-scale dataset for this task, named MASAD, which covers seven domains and 57 aspect categories with 38k image–text pairs. Extensive experiments have been conducted on the proposed dataset to provide several baselines for this task. Although our models obtain significant improvements on this task, empirical results show that MASA is more challenging than textual aspect-based sentiment analysis, indicating that MASA remains a challenging open problem that requires further effort.
Year: 2021
DOI: 10.1016/j.neucom.2021.05.040
Venue: Periodicals
Keywords: Sentiment analysis, Multimodal, Aspect-based sentiment analysis, Deep learning
DocType: Journal
Volume: 455
Issue: C
ISSN: 0925-2312
Citations: 0
PageRank: 0.34
References: 0
Authors: 5
Name, Order, Citations, PageRank
Jie Zhou, 1, 1, 2.07
Jiabao Zhao, 2, 2, 3.74
Jimmy Xiangji Huang, 3, 0, 0.34
Qinmin Vivian Hu, 4, 20, 6.06
Liang He, 5, 61, 20.38