Title
SA-NLI: A Supervised Attention Based Framework for Natural Language Inference
Abstract
Natural Language Inference (NLI) aims to determine the semantic relationship between a pair of sentences. As a critical component of NLI models, the attention mechanism has proven effective for both the representation of sentences and the interaction between them. However, the attention methods adopted in existing NLI models are either non-parametric or trained inside the model without explicit supervision, and the resulting attention weights are poorly interpretable. In this paper, we propose a Supervised Attention based Natural Language Inference (SA-NLI) framework to address this problem. In our framework, the intra attention module is trained to focus on syntactically related tokens, while the inter attention module is constrained to capture the alignment between sentences. Moreover, the supervised training of the intra attention module and the inter attention module is unified with the training of the NLI model through multi-task learning and transfer learning, respectively. We conduct extensive experiments on multiple NLI datasets, and the results demonstrate the effectiveness of our supervised attention based method. Further visual analysis validates the interpretability of the attention results, and extended experiments indicate the generalizability of our SA-NLI framework.
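The abstract describes supervising an attention module with syntactic signals while jointly optimizing the NLI objective via multi-task learning. Below is a minimal sketch of that idea, not the authors' implementation: it assumes PyTorch, and every module name, dimension, the target format (e.g., one-hot syntactic-head distributions), and the weight lambda_attn are illustrative assumptions.

# Minimal sketch (assumed PyTorch; not the paper's code) of an NLI model
# whose intra-attention weights are supervised against syntactic targets,
# trained jointly with the classification loss (multi-task learning).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SupervisedIntraAttentionNLI(nn.Module):
    def __init__(self, vocab_size=10000, dim=128, num_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.classifier = nn.Linear(2 * dim, num_classes)

    def attend(self, tokens):
        x = self.embed(tokens)                            # (batch, len, dim)
        scores = self.query(x) @ self.key(x).transpose(1, 2) / x.size(-1) ** 0.5
        attn = scores.softmax(dim=-1)                     # (batch, len, len)
        return attn @ x, attn                             # contexts + weights

    def forward(self, premise, hypothesis):
        p_ctx, p_attn = self.attend(premise)
        h_ctx, h_attn = self.attend(hypothesis)
        features = torch.cat([p_ctx.mean(1), h_ctx.mean(1)], dim=-1)
        return self.classifier(features), p_attn, h_attn

def multi_task_loss(logits, labels, attn, attn_target, lambda_attn=0.5):
    # NLI cross-entropy plus a supervision term pulling the attention
    # distribution toward the syntactic targets (hypothetical format:
    # per-token probability distributions over head positions).
    nli_loss = F.cross_entropy(logits, labels)
    attn_loss = F.kl_div(attn.clamp_min(1e-9).log(), attn_target,
                         reduction="batchmean")
    return nli_loss + lambda_attn * attn_loss

The KL term stands in for the attention supervision described in the abstract; the transfer-learning treatment of the inter attention module is omitted from this sketch for brevity.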
Year: 2020
DOI: 10.1016/j.neucom.2020.03.092
Venue: Neurocomputing
Keywords: Natural language inference, Supervised attention, Syntactic information, Alignment
DocType: Journal
Volume: 407
ISSN: 0925-2312
Citations: 0
PageRank: 0.34
References: 0
Authors: 5
Name            Order   Citations   PageRank
Peiguang Li     1       0           0.68
Hongfeng Yu     2       0           3.72
Wenkai Zhang    3       0           4.73
Guangluan Xu    4       5           3.19
Xian Sun        5       16          5.49