Title
Exposing numerical bugs in deep learning via gradient back-propagation
Abstract
Numerical computation is dominant in deep learning (DL) programs. Consequently, numerical bugs are one of the most prominent kinds of defects in DL programs. Numerical bugs can lead to exceptional values such as NaN (Not-a-Number) and INF (Infinity), which can propagate and eventually cause crashes or invalid outputs. They occur when special inputs cause invalid parameter values at internal mathematical operations such as log(). In this paper, we propose the first dynamic technique, called GRIST, which automatically generates a small input that can expose numerical bugs in DL programs. GRIST piggy-backs on the built-in gradient computation functionalities of DL infrastructures. Our evaluation on 63 real-world DL programs shows that GRIST detects 78 bugs, including 56 previously unknown bugs. After we submitted them to the corresponding issue repositories, eight bugs were confirmed and three were fixed. Moreover, GRIST exposes numerical bugs 8.79X faster than running the original programs with their provided inputs. Compared with the state-of-the-art static technique DEBAR, which produces 12 false positives and misses 31 true bugs (30 of which GRIST can find), GRIST misses only one known bug in those programs and reports no false positives. The results demonstrate the effectiveness of GRIST.
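To illustrate the core idea described in the abstract, the following is a minimal, hypothetical PyTorch sketch of gradient-guided input generation; the toy program, step size, and iteration budget are assumptions for illustration, not GRIST's actual implementation. It back-propagates to the input and steps the input in the direction that drives the operand of log() toward zero, exposing an INF value.

    import torch

    # Toy DL program with a potential numerical bug: the argument of log()
    # can be driven to zero by a suitable input (hypothetical example).
    def program(x):
        p = torch.sigmoid(x).prod()   # internal value feeding log()
        return torch.log(p)           # bug site: log(0) yields -inf

    # Gradient-guided input search (a sketch of the idea, not GRIST itself):
    # back-propagate to the input and descend, pushing the log() operand
    # toward its invalid range.
    x = torch.zeros(8, requires_grad=True)
    for step in range(200):
        y = program(x)
        if not torch.isfinite(y):
            print(f"exceptional value {y.item()} exposed at step {step}")
            break
        y.backward()                  # built-in gradient back-propagation
        with torch.no_grad():
            x -= 1.0 * x.grad         # step toward a smaller log() argument
            x.grad.zero_()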
Year
2021
DOI
10.1145/3468264.3468612
Venue
FSE
Keywords
Deep Learning Testing, Numerical Bug, Gradient Back-propagation, Search-based Software Testing
DocType
Conference
Citations
3
PageRank
0.38
References
0
Authors
6
Name           Order  Citations  PageRank
Ming Yan       1      16         1.23
Junjie Chen    2      831        4.71
Xiangyu Zhang  3      2857       151.00
Lin Tan        4      1648       67.22
Gan Wang       5      3          0.72
Zan Wang       6      44         7.06