Title
On the Generalization Error Bounds of Neural Networks under Diversity-Inducing Mutual Angular Regularization
Abstract
Recently, diversity-inducing regularization methods for latent variable models (LVMs), which encourage the components in LVMs to be diverse, have been studied to address several issues in latent variable modeling: (1) how to capture long-tail patterns underlying data; (2) how to reduce model complexity without sacrificing expressivity; (3) how to improve the interpretability of learned patterns. While the effectiveness of diversity-inducing regularizers such as the mutual angular regularizer has been demonstrated empirically, a rigorous theoretical analysis of them is still missing. In this paper, we aim to bridge this gap and analyze how the mutual angular regularizer (MAR) affects the generalization performance of supervised LVMs. We use the neural network (NN) as a model instance to carry out the study, and the analysis shows that increasing the diversity of hidden units in an NN reduces estimation error but increases approximation error. In addition to the theoretical analysis, we also present an empirical study demonstrating that the MAR can greatly improve the performance of NNs, with the empirical observations in accordance with the theoretical analysis.
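To make the regularizer concrete, the NumPy sketch below scores a set of hidden-unit weight vectors by the mean minus the variance of their pairwise non-obtuse angles, so that larger values correspond to more diverse (closer to mutually orthogonal) units. The function name, the gamma tradeoff parameter, and the exact mean-minus-variance form are illustrative assumptions, not necessarily the paper's exact formulation.

import numpy as np

def mutual_angular_regularizer(W, gamma=1.0):
    # W: (k, d) array; each row is the weight vector of one hidden unit.
    # Normalize rows so pairwise dot products become cosines.
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
    cos = np.clip(Wn @ Wn.T, -1.0, 1.0)
    # Non-obtuse angle between each distinct pair of units.
    iu = np.triu_indices(W.shape[0], k=1)
    angles = np.arccos(np.abs(cos[iu]))
    # Mean minus (gamma-weighted) variance of the pairwise angles:
    # large and uniform angles score highest.
    return angles.mean() - gamma * angles.var()

# Example: score the diversity of 10 hidden units with 50-dim weights.
reg = mutual_angular_regularizer(np.random.randn(10, 50))

During training, one would subtract a weighted copy of this quantity from the task loss, so that gradient descent is pushed toward more diverse hidden units.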
Year
2015
Venue
arXiv: Learning
Field
Interpretability, Latent variable model, Latent variable, Regularization (mathematics), Generalization error, Artificial intelligence, Artificial neural network, Empirical research, Approximation error, Machine learning, Mathematics
DocType
Journal
Volume
abs/1511.07110
Citations
6
PageRank
0.47
References
23
Authors
3
Name          Order  Citations  PageRank
Pengtao Xie   1      339        22.63
Yuntian Deng  2      241        14.12
Bo Xing       3      7332       471.43