Title
Interpreting Deep Models for Text Analysis via Optimization and Regularization Methods
Abstract
Interpreting deep neural networks is of great importance for understanding and verifying deep models for natural language processing (NLP) tasks. However, most existing approaches focus only on improving model performance and ignore interpretability. In this work, we propose an approach to investigate the meaning of hidden neurons in convolutional neural network (CNN) models. We first employ saliency maps and optimization techniques to approximate the information that hidden neurons detect from input sentences. We then develop regularization terms and explore words in the vocabulary to interpret this detected information. Experimental results demonstrate that our approach identifies meaningful and reasonable interpretations for hidden spatial locations. In addition, we show that our approach can describe the decision procedure of deep NLP models.
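The first step described in the abstract, using saliency maps to approximate what a hidden neuron detects from an input sentence, can be illustrated with a short gradient-based sketch. This is a minimal illustration, not the authors' implementation: the TextCNN model, its dimensions, and the chosen neuron (filter f at position p) are hypothetical assumptions.

# Minimal sketch (assumed setup, not the paper's code): score each input
# token by how strongly it drives one hidden spatial location of a text CNN.
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=50, num_filters=16, kernel_size=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, num_filters, kernel_size, padding=1)

    def forward(self, token_ids):
        x = self.embed(token_ids)                     # (batch, seq_len, embed_dim)
        x.retain_grad()                               # keep gradients at the embeddings
        h = torch.relu(self.conv(x.transpose(1, 2)))  # (batch, num_filters, seq_len)
        return x, h

model = TextCNN()
tokens = torch.randint(0, 1000, (1, 12))              # one 12-token "sentence"
embeddings, hidden = model(tokens)

# Pick one hidden spatial location (hypothetical filter f at position p)
# and backpropagate its activation to the input embeddings.
f, p = 3, 5
hidden[0, f, p].backward()

# Token-level saliency: L2 norm of the gradient over embedding dimensions.
saliency = embeddings.grad[0].norm(dim=1)             # (seq_len,)
print(saliency)

Tokens with large saliency scores approximate the part of the input sentence that the chosen neuron responds to; the paper's second step then interprets this signal via regularization terms and a search over vocabulary words.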
Year
2019
Venue
THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE
Field
Interpretability, Text mining, Saliency map, Convolutional neural network, Computer science, Regularization (mathematics), Artificial intelligence, Vocabulary, Deep neural networks, Machine learning
DocType
Conference
Citations
1
PageRank
0.40
References
0
Authors
4
Name          Order  Citations  PageRank
Hao Yuan      1      27         6.64
Yongjun Chen  2      5          1.19
Xia Hu        3      2411       110.07
Shuiwang Ji   4      2579       122.25