Title
Towards Interpretable Natural Language Understanding with Explanations as Latent Variables
Abstract
Recently, generating natural language explanations has shown promising results, not only in offering interpretable explanations but also in providing additional information and supervision for prediction. However, existing approaches usually require a large set of human-annotated explanations for training, and collecting such explanations is both time-consuming and expensive. In this paper, we develop a general framework for interpretable natural language understanding that requires only a small set of human-annotated explanations for training. Our framework treats natural language explanations as latent variables that model the underlying reasoning process of a neural model. We develop a variational EM framework for optimization, in which an explanation generation module and an explanation-augmented prediction module are alternately optimized and mutually enhance each other. Moreover, we propose an explanation-based self-training method under this framework for semi-supervised learning, which alternates between assigning pseudo-labels to unlabeled data and generating new explanations, so that the two steps iteratively improve each other. Experiments on two natural language understanding tasks demonstrate that our framework can not only make effective predictions in both supervised and semi-supervised settings, but also generate high-quality natural language explanations.
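The abstract describes the optimization procedure only at a high level. Below is a minimal sketch (not taken from the paper) of how the EM-style alternation between the two modules and the explanation-based self-training loop might be organized; all names (generate, train_predictor, predict, confidence_threshold, etc.) are hypothetical placeholders rather than the authors' implementation, and the real modules would be neural models such as a seq2seq explanation generator and an explanation-augmented classifier.

```python
from typing import Callable, List, Tuple

# Hypothetical type aliases for readability.
Example = str
Label = int
Explanation = str


def em_with_self_training(
    labeled: List[Tuple[Example, Label]],
    unlabeled: List[Example],
    generate: Callable[[Example], Explanation],                        # explanation generation module
    train_generator: Callable[[List[Tuple[Example, Explanation]]], None],
    predict: Callable[[Example, Explanation], Tuple[Label, float]],    # returns (label, confidence)
    train_predictor: Callable[[List[Tuple[Example, Explanation, Label]]], None],
    rounds: int = 5,
    confidence_threshold: float = 0.9,
) -> None:
    """Alternate between the two modules (EM-style), then self-train on unlabeled data."""
    for _ in range(rounds):
        # E-step (sketch): generate explanations for the labeled examples
        # with the current explanation generation module.
        augmented = [(x, generate(x), y) for x, y in labeled]

        # M-step (sketch): update the explanation-augmented prediction module
        # on (input, explanation, label) triples.
        train_predictor(augmented)

        # Update the generation module so its explanations better support
        # prediction (in practice, e.g., using prediction likelihood as feedback).
        train_generator([(x, e) for x, e, _ in augmented])

        # Explanation-based self-training (sketch): pseudo-label unlabeled data
        # with confident predictions and reuse them as extra supervision.
        pseudo: List[Tuple[Example, Label]] = []
        for x in unlabeled:
            e = generate(x)
            y_hat, conf = predict(x, e)
            if conf >= confidence_threshold:
                pseudo.append((x, y_hat))
        labeled = labeled + pseudo
```

The confidence threshold here is one possible way to filter pseudo-labels; the paper's actual criterion for selecting unlabeled examples may differ.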
Year
2020
Venue
NIPS 2020
DocType
Conference
Volume
33
Citations
0
PageRank
0.34
References
0
Authors
7
Name               Order   Citations   PageRank
Wangchunshu Zhou   1       0           1.69
Jinyi Hu           2       0           0.68
Hanlin Zhang       3       0           0.34
Xiaodan Liang      4       37          9.73
Maosong Sun        5       2293        162.86
Chen-Yan Xiong     6       405         30.82
Jian Tang          7       1322        59.93