Title
Neural Module Networks for Reasoning over Text
Abstract
Answering compositional questions that require multiple steps of reasoning against text is challenging, especially when they involve discrete, symbolic operations. Neural module networks (NMNs) learn to parse such questions as executable programs composed of learnable modules, and perform well in synthetic visual QA domains. However, we find it challenging to learn these models for non-synthetic questions on open-domain text, where a model needs to deal with the diversity of natural language and perform a broader range of reasoning. We extend NMNs by (a) introducing modules that reason over a paragraph of text, performing symbolic operations (such as arithmetic, sorting, and counting) over numbers and dates in a probabilistic and differentiable manner, and (b) proposing an unsupervised auxiliary loss that helps extract arguments associated with the events in text. Additionally, we show that a limited amount of heuristically obtained supervision for question programs and intermediate module outputs provides a sufficient inductive bias for accurate learning. Our model significantly outperforms state-of-the-art models on a subset of the DROP dataset that poses a variety of reasoning challenges covered by our modules.
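The abstract's key technical claim is that symbolic operations over numbers and dates can be executed in a probabilistic, differentiable way. Below is a minimal sketch of one such operation, a soft "less-than" comparison between two attention distributions over the sorted values mentioned in a paragraph. This is an illustrative assumption about the module interface (the function name, tensor shapes, and use of PyTorch are mine), not the authors' implementation.

```python
import torch

def prob_less_than(p_x: torch.Tensor, p_y: torch.Tensor) -> torch.Tensor:
    """Differentiable P(X < Y) for two independent categorical
    distributions over the same ascending-sorted support (e.g., soft
    attentions over the dates or numbers mentioned in a paragraph)."""
    # Suffix sums: tail[j] = P(Y >= value_j).
    tail = torch.flip(torch.cumsum(torch.flip(p_y, dims=[0]), dim=0), dims=[0])
    # Shift left by one so greater[j] = P(Y > value_j), strictly.
    greater = torch.cat([tail[1:], tail.new_zeros(1)])
    # P(X < Y) = sum_j P(X = value_j) * P(Y > value_j).
    return torch.sum(p_x * greater)

# Two soft attentions over three sorted dates.
p_a = torch.tensor([0.7, 0.2, 0.1])
p_b = torch.tensor([0.1, 0.3, 0.6])
print(prob_less_than(p_a, p_b))  # tensor(0.7500): event A likely precedes B
```

Because the output is a smooth function of the input attentions, gradients from the answer loss can flow back through the comparison, which is what lets operations like date comparison, sorting, and counting be trained end-to-end alongside the neural modules.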
Year
2020
Venue
ICLR
Keywords
question answering, compositionality, neural module networks, multi-step reasoning, reading comprehension
DocType
Conference
Citations
1
PageRank
0.35
References
17
Authors
5
Name             Order  Citations  PageRank
Nitish Gupta     1      117        9.81
Kevin Lin        2      1          1.36
Dan Roth         3      7735       695.19
Sameer Singh     4      1060       71.63
Matthew Gardner  5      704        38.49