Title
Learning Credible Deep Neural Networks with Rationale Regularization
Abstract
Recent explainability studies have shown that state-of-the-art DNNs do not always rely on the correct evidence when making decisions. This not only hampers their generalization but also makes them less likely to be trusted by end users. In pursuit of more credible DNNs, in this paper we propose CREX, which encourages DNN models to focus on the evidence that actually matters for the task at hand and to avoid overfitting to data-dependent bias and artifacts. Specifically, CREX regularizes the training process of DNNs with rationales, i.e., subsets of features highlighted by domain experts as justifications for predictions, so that the local explanations generated by a DNN conform with the expert rationales. Even when rationales are not available, CREX can still be useful by requiring the generated explanations to be sparse. Experimental results on two text classification datasets demonstrate the increased credibility of DNNs trained with CREX. Comprehensive analysis further shows that while CREX does not always improve prediction accuracy on the held-out test set, it significantly increases DNN accuracy on new and previously unseen data beyond the test set, highlighting the advantage of increased credibility.
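The abstract describes CREX as a task loss plus a regularizer that aligns a model's local explanations with expert rationales, falling back to a sparsity penalty when no rationales exist. Below is a minimal PyTorch sketch of that idea, assuming input-gradient attributions as the local explanation; the crex_loss helper, the rationale_mask argument, and the lam weight are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def crex_loss(model, x, y, rationale_mask=None, lam=0.1):
    """CREX-style objective (sketch): task loss plus a penalty that pushes
    the model's local explanation toward the expert rationale, or toward
    sparsity when no rationale is given.

    x: embedded input, shape (batch, seq_len, emb_dim)
    y: class labels, shape (batch,)
    rationale_mask: 1.0 on rationale tokens, 0.0 elsewhere, shape (batch, seq_len)
    """
    x = x.clone().requires_grad_(True)
    logits = model(x)
    task_loss = F.cross_entropy(logits, y)

    # Input gradients as a simple differentiable local explanation
    # (the paper may use a different attribution method).
    grads = torch.autograd.grad(task_loss, x, create_graph=True)[0]
    explanation = grads.abs().sum(dim=-1)  # per-token importance, (batch, seq_len)

    if rationale_mask is not None:
        # Penalize importance assigned outside the expert rationale.
        penalty = (explanation * (1.0 - rationale_mask)).sum(dim=1).mean()
    else:
        # No rationale available: plain L1 sparsity on the explanation.
        penalty = explanation.sum(dim=1).mean()

    return task_loss + lam * penalty
```

In training one would compute loss = crex_loss(model, embeddings, labels, mask) and backpropagate as usual; a model that embeds tokens internally would need to expose its embedding layer so the gradient can be taken with respect to the continuous input.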
Year
2019
DOI
10.1109/ICDM.2019.00025
Venue
2019 IEEE International Conference on Data Mining (ICDM)
Keywords
Deep neural network, Explainability, Credibility, Expert rationales
Field
Credibility, Computer science, Regularization (mathematics), Artificial intelligence, Overfitting, Deep neural networks, Machine learning, Test set
DocType
Conference
ISSN
1550-4786
ISBN
978-1-7281-4605-8
Citations
1
PageRank
0.35
References
14
Authors
4
Name          Order  Citations  PageRank
Mengnan Du    1      94         13.54
Ninghao Liu   2      121        12.88
Fan Yang      3      196        48.38
Xia Hu        4      2411       110.07