Title
Towards Visually Explaining Variational Autoencoders
Abstract
Recent advances in Convolutional Neural Network (CNN) model interpretability have led to impressive progress in visualizing and understanding model predictions. In particular, gradient-based visual attention methods have driven much recent effort in using visual attention maps as a means for visual explanations. A key problem, however, is that these methods are designed for classification and categorization tasks, and their extension to explaining generative models, e.g., variational autoencoders (VAE), is not trivial. In this work, we take a step towards bridging this crucial gap, proposing the first technique to visually explain VAEs by means of gradient-based attention. We present methods to generate visual attention from the learned latent space, and also demonstrate that such attention explanations serve more than just explaining VAE predictions. We show how these attention maps can be used to localize anomalies in images, demonstrating state-of-the-art performance on the MVTec-AD dataset. We also show how they can be infused into model training, helping bootstrap the VAE into learning improved latent space disentanglement, as demonstrated on the dSprites dataset.
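The abstract's central idea, generating visual attention from the learned latent space with gradient-based attention, can be illustrated compactly. The snippet below is a minimal Grad-CAM-style sketch, not the authors' released implementation: the interface `encoder(x)` returning `(mu, logvar)`, the choice of `target_layer`, and the normalization are all assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def vae_latent_attention(encoder, x, target_layer):
    """Gradient-based attention from a VAE latent code.

    A minimal Grad-CAM-style sketch (assumed interfaces, not the
    paper's exact method). `encoder(x)` is assumed to return the
    posterior parameters (mu, logvar); `target_layer` is a conv
    layer inside the encoder whose feature maps we attend over.
    """
    feats, grads = [], []
    fh = target_layer.register_forward_hook(
        lambda m, inp, out: feats.append(out))
    bh = target_layer.register_full_backward_hook(
        lambda m, gin, gout: grads.append(gout[0]))

    mu, _ = encoder(x)       # latent posterior means
    encoder.zero_grad()
    mu.sum().backward()      # gradients of all latent units at once

    fh.remove()
    bh.remove()

    A, dA = feats[0], grads[0]                      # activations, gradients
    w = dA.mean(dim=(2, 3), keepdim=True)           # per-channel weights
    cam = F.relu((w * A).sum(dim=1, keepdim=True))  # weighted combination
    cam = F.interpolate(cam, size=x.shape[-2:],
                        mode="bilinear", align_corners=False)
    lo = cam.amin(dim=(2, 3), keepdim=True)
    hi = cam.amax(dim=(2, 3), keepdim=True)
    return (cam - lo) / (hi - lo + 1e-8)            # normalize to [0, 1]
```

In the anomaly-localization use case the abstract describes, a map of this kind could be thresholded per image to flag anomalous regions; the exact scoring used on MVTec-AD is detailed in the paper itself.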
Year
2020
DOI
10.1109/CVPR42600.2020.00867
Venue
CVPR
DocType
Conference
Citations
0
PageRank
0.34
References
35
Authors
8
Name                 Order  Citations  PageRank
WenQian Liu          1      4          1.08
Runze Li             2      1          1.38
Meng Zheng           3      37         6.79
Srikrishna Karanam   4      161        14.40
Ziyan Wu             5      231        21.99
Bir Bhanu            6      3356       380.19
Richard J. Radke     7      1289       78.89
Octavia I. Camps     8      631        43.56