Title
Ensemble Robustness and Generalization of Stochastic Deep Learning Algorithms
Abstract
The question of why deep learning algorithms generalize so well has attracted increasing research interest. However, most of the well-established approaches, such as hypothesis capacity, stability, or sparseness, have not provided complete explanations (Zhang et al., 2016; Kawaguchi et al., 2017). In this work, we focus on the robustness approach (Xu & Mannor, 2012): if the error of a hypothesis does not change much under perturbations of its training examples, then it will also generalize well. As most deep learning algorithms are stochastic (e.g., Stochastic Gradient Descent, Dropout, and Bayes-by-backprop), we revisit the robustness arguments of Xu & Mannor and introduce a new approach, ensemble robustness, that concerns the robustness of a population of hypotheses. Through the lens of ensemble robustness, we reveal that a stochastic learning algorithm can generalize well as long as its sensitivity to adversarial perturbations is bounded on average over training examples. Moreover, an algorithm may be sensitive to some adversarial examples (Goodfellow et al., 2015) but still generalize well. To support our claims, we provide extensive simulations for different deep learning algorithms and different network architectures exhibiting a strong correlation between ensemble robustness and the ability to generalize.
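The abstract's central quantity, average sensitivity of a population of hypotheses to perturbations of training examples, can be estimated empirically. The sketch below is purely illustrative and is not the paper's code: it uses a toy linear model trained by SGD as a stand-in for a deep network, draws several hypotheses by re-running the stochastic learner, measures the worst-case loss deviation under small random perturbations of each training point, and averages over points and hypothesis draws. All function names and parameters here are hypothetical.

```python
# Illustrative sketch (assumptions: linear model, squared loss, random
# probing of the perturbation ball instead of a true adversarial search).
import numpy as np

rng = np.random.default_rng(0)

def loss(w, x, y):
    # squared loss of a linear hypothesis; stands in for a network's loss
    return (x @ w - y) ** 2

def train(X, y, seed):
    # stochastic learner: SGD from a random init; each seed gives one
    # hypothesis drawn from the algorithm's output distribution
    rng_ = np.random.default_rng(seed)
    w = rng_.normal(size=X.shape[1])
    for _ in range(200):
        i = rng_.integers(len(X))
        w -= 0.01 * 2 * (X[i] @ w - y[i]) * X[i]
    return w

def ensemble_robustness(X, y, n_draws=10, eps=0.1, n_probe=20):
    # average, over hypothesis draws and training points, of the largest
    # loss deviation found under perturbations of norm <= eps
    total = 0.0
    for seed in range(n_draws):
        w = train(X, y, seed)
        devs = []
        for x, t in zip(X, y):
            base = loss(w, x, t)
            deltas = rng.normal(size=(n_probe, len(x)))
            deltas *= eps / np.linalg.norm(deltas, axis=1, keepdims=True)
            worst = max(abs(loss(w, x + d, t) - base) for d in deltas)
            devs.append(worst)
        total += np.mean(devs)
    return total / n_draws

X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)
score = ensemble_robustness(X, y, n_draws=3)
print(f"estimated ensemble robustness: {score:.4f}")
```

A smaller score indicates that, on average over training examples and over the algorithm's randomness, the learned hypotheses are insensitive to perturbations, which is the condition the abstract links to good generalization.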
Year
2018
Venue
International Conference on Learning Representations
Field
Population,Stochastic gradient descent,Network architecture,Algorithm,Robustness (computer science),Artificial intelligence,Deep learning,Ensemble learning,Mathematics,Machine learning,Bounded function
DocType
Conference
Citations
4
PageRank
0.43
References
3
Authors
6
Name         Order  Citations  PageRank
Tom Zahavy   1      5          88.81
Bingyi Kang  2      13         89.24
Alex Sivak   3      4          0.43
Jiashi Feng  4      2165       140.81
Xu, Huan     5      1116       71.73
Shie Mannor  6      3340       285.45