Title
Analyzing Compositionality-Sensitivity of NLI Models
Abstract
Success in natural language inference (NLI) should require a model to understand both lexical and compositional semantics. However, through adversarial evaluation, we find that several state-of-the-art models with diverse architectures over-rely on the former and fail to use the latter. Further, this compositionality unawareness is not reflected in standard evaluation on current datasets. We show that removing RNNs from existing models or shuffling input words during training does not cause a large performance drop, despite the explicit removal of compositional information. We therefore propose a compositionality-sensitivity testing setup that analyzes models on natural examples from existing datasets that cannot be solved via lexical features alone (i.e., on which a bag-of-words model assigns high probability to a wrong label), hence revealing the models' actual compositionality awareness. We show that this setup not only highlights the limited compositional ability of current NLI models, but also differentiates model performance based on design, e.g., separating shallow bag-of-words models from deeper, linguistically grounded tree-based models. Our evaluation setup is an important analysis tool: it complements existing adversarial and linguistically driven diagnostic evaluations, and exposes opportunities for future work on evaluating models' compositional understanding.
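To make the selection criterion in the abstract concrete, here is a minimal sketch (not the authors' released code) of how one might flag evaluation examples on which a bag-of-words model assigns high probability to a wrong label. The unigram logistic-regression classifier, the premise-hypothesis concatenation, and the 0.7 threshold are all illustrative assumptions, not the paper's exact configuration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

def select_compositionality_sensitive(train_pairs, train_labels,
                                      eval_pairs, eval_labels,
                                      threshold=0.7):
    """Return indices of eval examples on which a bag-of-words model
    puts probability >= threshold on some label other than the gold one."""
    # ASSUMPTION: a unigram BoW over the concatenated premise + hypothesis;
    # the paper's actual lexical baseline may be configured differently.
    vectorizer = CountVectorizer()
    X_train = vectorizer.fit_transform([p + " " + h for p, h in train_pairs])
    bow = LogisticRegression(max_iter=1000).fit(X_train, train_labels)

    X_eval = vectorizer.transform([p + " " + h for p, h in eval_pairs])
    probs = bow.predict_proba(X_eval)            # shape: (n_examples, n_classes)
    label_index = {c: i for i, c in enumerate(bow.classes_)}

    hard_indices = []
    for i, gold in enumerate(eval_labels):
        wrong_mass = probs[i].copy()
        wrong_mass[label_index[gold]] = 0.0      # mask out the gold label
        if wrong_mass.max() >= threshold:        # confidently wrong -> keep
            hard_indices.append(i)
    return hard_indices
```

Under this setup, a model is then evaluated only on the returned subset, so lexical features alone cannot produce the right answer and any remaining accuracy must draw on compositional information.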
Year
2018
Venue
National Conference on Artificial Intelligence (AAAI)
Field
Principle of compositionality, Computer science, Shuffling, Natural language processing, Artificial intelligence, Machine learning, Adversarial system, Natural language inference
Volume
abs/1811.07033
Citations
1
PageRank
0.35
References
20
Authors
3
Name           Order   Citations   PageRank
Yixin Nie      1       30          4.24
Yicheng Wang   2       22          8.06
Mohit Bansal   3       871         63.19