Title
VistaNet: Visual Aspect Attention Network for Multimodal Sentiment Analysis
Abstract
Detecting the sentiment expressed by a document is a key task for many applications, e.g., modeling user preferences, monitoring consumer behaviors, or assessing product quality. Traditionally, sentiment analysis has relied primarily on textual content. Fueled by the rise of mobile phones, which are often the only cameras on hand, documents on the Web (e.g., reviews, blog posts, tweets) are increasingly multimodal in nature, pairing photos with textual content. A question arises as to whether the visual component could be useful for sentiment analysis as well. In this work, we propose the Visual Aspect Attention Network, or VistaNet, which leverages both textual and visual components. We observe that, with respect to sentiment detection, images often play a supporting role to text, highlighting the salient aspects of an entity rather than expressing sentiments independently of the text. Therefore, instead of using visual information as features, VistaNet uses visual information as an alignment signal that points out the important sentences of a document through attention. Experiments on restaurant reviews showcase the effectiveness of visual aspect attention vis-à-vis visual features or textual attention.
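To make the image-as-attention idea concrete, below is a minimal PyTorch sketch of image-guided attention over sentence vectors: an image feature vector acts as a query that scores each sentence, and the document representation is the attention-weighted sum of sentences. The class name, dimensions, and additive scoring form are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class VisualAspectAttention(nn.Module):
    """Illustrative sketch: an image feature vector scores sentences via
    additive attention; the document vector is the weighted sum of sentences.
    Layer names and dimensions are assumptions, not the paper's exact setup."""

    def __init__(self, sent_dim=256, img_dim=4096, attn_dim=128):
        super().__init__()
        self.proj_sent = nn.Linear(sent_dim, attn_dim)   # project sentence vectors
        self.proj_img = nn.Linear(img_dim, attn_dim)     # project image features
        self.score = nn.Linear(attn_dim, 1, bias=False)  # scalar attention score

    def forward(self, sentences, image):
        # sentences: (batch, num_sentences, sent_dim); image: (batch, img_dim)
        s = self.proj_sent(sentences)                        # (batch, n, attn_dim)
        v = self.proj_img(image).unsqueeze(1)                # (batch, 1, attn_dim)
        scores = self.score(torch.tanh(s + v)).squeeze(-1)   # (batch, n)
        weights = F.softmax(scores, dim=-1)                  # attention over sentences
        doc = torch.bmm(weights.unsqueeze(1), sentences)     # (batch, 1, sent_dim)
        return doc.squeeze(1), weights


if __name__ == "__main__":
    attn = VisualAspectAttention()
    sents = torch.randn(2, 10, 256)   # 2 reviews, 10 sentence vectors each
    img = torch.randn(2, 4096)        # e.g., pre-extracted CNN image features
    doc_vec, weights = attn(sents, img)
    print(doc_vec.shape, weights.shape)  # torch.Size([2, 256]) torch.Size([2, 10])
```

In the full model described in the abstract, the sentence vectors would come from a lower-level text encoder and a review may carry several images; this sketch shows only the single image-guided sentence-attention step.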
Year
2019
Venue
AAAI
Field
Information retrieval, Computer science, Sentiment analysis, Artificial intelligence, Machine learning, Salient
DocType
Conference
Citations
1
PageRank
0.35
References
0
Authors
3
Name | Order | Citations | PageRank
Tuan Quoc Truong | 1 | 1 | 0.35
Hady Wirawan Lauw | 2 | 809 | 57.64
Quoc-Tuan Truong | 3 | 1 | 0.35