Title
Using syntactical and logical forms to evaluate textual inference competence.
Abstract
In light of recent breakthroughs in transfer learning for Natural Language Processing, much progress has been achieved on Natural Language Inference. Different models now present high accuracy on popular inference datasets such as SNLI, MNLI and SciTail. At the same time, there are several indicators that those datasets can be exploited using simple linguistic patterns. This fact complicates our understanding of the actual capacity of machine learning models to solve the complex task of textual inference. We propose a new set of tasks that require specific capacities over linguistic logical forms such as: i) Boolean coordination, ii) quantifiers, iii) definite descriptions, and iv) counting operators. By evaluating a model on our stratified dataset, we can better pinpoint its specific inferential difficulties on each kind of textual structure. We evaluate two kinds of neural models that implicitly exploit language structure: recurrent models and the Transformer network BERT. We show that although BERT is clearly more effective at generalizing over most logical forms, there is room for improvement when dealing with counting operators.
Year: 2019
Venue: arXiv: Computation and Language
DocType: Journal
Volume: abs/1905.05704
Citations: 0
PageRank: 0.34
References: 0
Authors: 3
Name                 Order  Citations  PageRank
Felipe Salvatore     1      0          0.68
Marcelo Finger       2      30         10.09
Roberto Hirata Jr.   3      13         7.80