Title
Gradient Backpropagation based Feature Attribution to Enable Explainable-AI on the Edge
Abstract
There has been a recent surge in the field of Explainable AI (XAI), which tackles the problem of providing insights into the behavior of black-box machine learning models. Within this field, feature attribution comprises methods that assign relevance scores to input features and visualize them as a heatmap. Designing flexible accelerators for multiple such algorithms is challenging since the hardware mapping of these algorithms has not been studied yet. In this work, we first analyze the dataflow of gradient backpropagation based feature attribution algorithms to determine the resource overhead required over inference. The gradient computation is optimized to minimize the memory overhead. Second, we develop a High-Level Synthesis (HLS) based configurable FPGA design that is targeted for edge devices and supports three feature attribution algorithms. Tile-based computation is employed to maximally use on-chip resources while adhering to the resource constraints. Representative CNNs are trained on the CIFAR-10 dataset and implemented on multiple Xilinx FPGAs using 16-bit fixed-point precision, demonstrating the flexibility of our library. Finally, through efficient reuse of allocated hardware resources, our design methodology demonstrates a pathway to repurpose inference accelerators to support feature attribution with minimal overhead, thereby enabling real-time XAI on the edge.
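To illustrate the class of methods the abstract refers to, the following is a minimal sketch of vanilla-gradient feature attribution on a hypothetical two-layer toy network: the relevance score of each input feature is the magnitude of the gradient of the predicted class score with respect to that feature, obtained by backpropagating through the network. This toy example is for illustration only; it is not the paper's FPGA implementation, and the model, weights, and function name are invented for the sketch.

```python
import numpy as np

def saliency(x, W1, W2):
    """Vanilla-gradient feature attribution for a toy 2-layer MLP.

    Forward:  h = relu(W1 @ x);  logits = W2 @ h
    Backward: gradient of the top logit with respect to the input x.
    The absolute gradient is used as the per-feature relevance score
    (the values that a heatmap would visualize).
    """
    z = W1 @ x                   # pre-activation
    h = np.maximum(z, 0.0)       # ReLU
    logits = W2 @ h
    c = int(np.argmax(logits))   # explain the predicted class

    # Backpropagate d(logits[c]) / dx through both layers.
    dh = W2[c]                   # d logits[c] / d h
    dz = dh * (z > 0)            # ReLU gates the gradient
    dx = W1.T @ dz               # d logits[c] / d x
    return np.abs(dx)            # relevance scores, one per input feature
```

Note that only one extra backward pass over the already-computed activations is needed, which is the source of the "resource overhead over inference" that the paper analyzes.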
Year
2022
DOI
10.1109/VLSI-SoC54400.2022.9939601
Venue
2022 IFIP/IEEE 30th International Conference on Very Large Scale Integration (VLSI-SoC)
Keywords
Convolution Neural Network, Explainable Machine Learning, Back-propagation, Hardware Accelerator, FPGA, High-Level Synthesis (HLS)
DocType
Conference
ISSN
2324-8432
ISBN
978-1-6654-9006-1
Citations
0
PageRank
0.34
References
3
Authors
3
Name                 Order  Citations  PageRank
Ashwin Bhat          1      0          0.34
Adou Sangbone Assoa  2      0          0.34
Arijit Raychowdhury  3      2844       8.04