Title
Interpretable Neuron Structuring with Graph Spectral Regularization
Abstract
While neural networks are powerful approximators used to classify or embed data into lower dimensional spaces, they are often regarded as black boxes with uninterpretable features. Here we propose Graph Spectral Regularization for making hidden layers more interpretable without significantly impacting performance on the primary task. Taking inspiration from spatial organization and localization of neuron activations in biological networks, we use a graph Laplacian penalty to structure the activations within a layer. This penalty encourages activations to be smooth either on a predetermined graph or on a feature-space graph learned from the data via co-activations of a hidden layer of the neural network. We show numerous uses for this additional structure including cluster indication and visualization in biological and image data sets.
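The graph Laplacian penalty mentioned in the abstract is a standard smoothness regularizer on a layer's activations. A minimal sketch of such a penalty, assuming unit activations `activations` and a fixed, symmetric adjacency matrix `adjacency` (an illustration of the general technique, not the authors' exact implementation):

```python
import numpy as np

def graph_spectral_penalty(activations, adjacency):
    """Graph Laplacian smoothness penalty tr(A L A^T).

    activations: (batch, n_units) hidden-layer activations; each unit
        is treated as a node of the graph.
    adjacency: (n_units, n_units) symmetric nonnegative weight matrix W.
    """
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency  # combinatorial Laplacian L = D - W
    # tr(A L A^T) = 1/2 * sum_ij W_ij * (a_i - a_j)^2 summed over the batch:
    # the penalty is large when strongly connected units have dissimilar
    # activations, so minimizing it encourages smoothness on the graph.
    return np.trace(activations @ laplacian @ activations.T)
```

In training, a term like `loss + alpha * graph_spectral_penalty(hidden, W)` (with `alpha` a regularization weight) would push activations to vary smoothly over the chosen graph.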
Year
2020
DOI
10.1007/978-3-030-44584-3_40
Venue
IDA
Keywords
Feature saliency, Graph learning, Neural Network Interpretability
DocType
Conference
Volume
12080
Citations
0
PageRank
0.34
References
0
Authors
9
Name                 Order  Citations  PageRank
Alexander Tong       1      0          1.69
David van Dijk       2      40         5.77
Jay S. Stanley       3      0          0.34
Matthew Amodio       4      5          2.44
Kristina Yim         5      0          0.34
Rebecca Muhle        6      0          0.34
James Noonan         7      0          0.34
Guy Wolf             8      41         1.55
Smita Krishnaswamy   9      1          3.06