Title
Reasoning With Sarcasm By Reading In-Between
Abstract
Sarcasm is a sophisticated speech act that commonly manifests in social communities such as Twitter and Reddit. The prevalence of sarcasm on the social web is highly disruptive to opinion mining systems, due not only to its tendency to flip polarity but also to its use of figurative language. Sarcasm commonly manifests with a contrastive theme, either between positive and negative sentiment or between literal and figurative scenarios. In this paper, we revisit the notion of modeling contrast in order to reason with sarcasm. More specifically, we propose an attention-based neural model that looks in between instead of across, enabling it to explicitly model contrast and incongruity. We conduct extensive experiments on six benchmark datasets from Twitter, Reddit and the Internet Argument Corpus. Our proposed model not only achieves state-of-the-art performance on all datasets but also enjoys improved interpretability.
Year
2018
DOI
10.18653/v1/p18-1093
Venue
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL), Vol. 1
Field
Interpretability, Sarcasm, Social web, Sentiment analysis, Computer science, Artificial intelligence, Natural language processing, Speech act, Literal and figurative language, The Internet
DocType
Journal
Volume
abs/1805.02856
Citations
1
PageRank
0.34
References
23
Authors
4

Name            Order  Citations  PageRank
Yi Tay          1      229        28.97
Anh Tuan Luu    2      177        11.34
Siu Cheung Hui  3      1106       86.71
Jian Su         4      1668       93.47