Title
Ensemble Robustness of Deep Learning Algorithms
Abstract
The question of why deep learning algorithms perform so well in practice has attracted increasing research interest. However, most well-established approaches, such as hypothesis capacity, robustness, or sparseness, have not provided complete explanations, due to the high complexity of deep learning algorithms and their inherent randomness. In this work, we introduce a new approach, ensemble robustness, towards characterizing the generalization performance of generic deep learning algorithms. Ensemble robustness concerns the robustness of the population of hypotheses that may be output by a learning algorithm. Through the lens of ensemble robustness, we reveal that a stochastic learning algorithm can generalize well as long as its sensitivity to adversarial perturbation is bounded on average, or equivalently, the performance variance of the algorithm is small. Quantifying the ensemble robustness of various deep learning algorithms may be difficult analytically. However, extensive simulations of seven common deep learning algorithms across different network architectures provide supporting evidence for our claims. Furthermore, our work explains the good performance of several published deep learning algorithms.
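The paper's formal definitions are not reproduced in this record, but the core idea stated in the abstract — averaging a hypothesis's sensitivity to small adversarial input perturbations over the distribution of hypotheses a stochastic algorithm may output — can be sketched on a toy problem. The learner, dataset, and sensitivity measure below are illustrative assumptions, not the paper's construction:

```python
import random

# Toy 1-D regression dataset: learn y = 2*x.
DATA = [(x / 10.0, 2.0 * (x / 10.0)) for x in range(-10, 11)]

def train(seed, steps=200, lr=0.1):
    """A stochastic learning algorithm: SGD on squared loss.
    The seed captures the algorithm's internal randomness (sample order),
    so different seeds yield different hypotheses from the same algorithm."""
    rng = random.Random(seed)
    w = 0.0
    for _ in range(steps):
        x, y = rng.choice(DATA)
        w -= lr * 2.0 * (w * x - y) * x  # gradient of (w*x - y)^2
    return w

def loss(w, x, y):
    return (w * x - y) ** 2

def sensitivity(w, eps=0.05, grid=5):
    """Worst-case loss deviation when each input is perturbed within +/- eps
    (a crude grid-search stand-in for an adversarial perturbation)."""
    worst = 0.0
    for x, y in DATA:
        base = loss(w, x, y)
        for k in range(-grid, grid + 1):
            worst = max(worst, abs(loss(w, x + eps * k / grid, y) - base))
    return worst

# Ensemble robustness, empirically: average the adversarial sensitivity
# over many hypotheses drawn from the algorithm's output distribution.
hypotheses = [train(seed) for seed in range(20)]
avg_sensitivity = sum(sensitivity(w) for w in hypotheses) / len(hypotheses)
```

A single unlucky hypothesis may be fragile; the abstract's claim is that generalization is governed by this population average, which is what the last two lines estimate.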
Year
2016
Venue
arXiv: Learning
Field
Population, Computer science, Network architecture, Robustness (computer science), Theoretical computer science, Through-the-lens metering, Artificial intelligence, Deep learning, Ensemble learning, Randomness, Algorithm, Machine learning, Bounded function
DocType
Journal
Volume
abs/1602.02389
Citations
5
PageRank
0.67
References
10
Authors
5
Name          Order  Citations  PageRank
Jiashi Feng   1      2165       140.81
Tom Zahavy    2      5          3.37
Bingyi Kang   3      138        9.24
Xu, Huan      4      1116       71.73
Shie Mannor   5      3340       285.45