Title
Regularization via Mass Transportation
Abstract
The goal of regression and classification methods in supervised learning is to minimize the empirical risk, that is, the expectation of some loss function quantifying the prediction error under the empirical distribution. When facing scarce training data, overfitting is typically mitigated by adding regularization terms to the objective that penalize hypothesis complexity. In this paper we introduce new regularization techniques using ideas from distributionally robust optimization, and we give new probabilistic interpretations to existing techniques. Specifically, we propose to minimize the worst-case expected loss, where the worst case is taken over the ball of all (continuous or discrete) distributions that have a bounded transportation distance from the (discrete) empirical distribution. By choosing the radius of this ball judiciously, we can guarantee that the worst-case expected loss provides an upper confidence bound on the loss on test data, thus offering new generalization bounds. We prove that the resulting regularized learning problems are tractable and can be tractably kernelized for many popular loss functions. The proposed approach to regularization is also extended to neural networks. We validate our theoretical out-of-sample guarantees through simulated and empirical experiments.
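The central object of the abstract, the worst-case expected loss over a transportation-distance ball around the empirical distribution, can be written compactly. The following is a minimal LaTeX sketch using generic placeholder notation (the symbols below are ours for illustration, not necessarily the paper's):

% Sketch of the distributionally robust learning problem from the abstract.
% Notation (ours, illustrative):
%   \hat{\mathbb{P}}_N -- empirical distribution of the N training samples
%   W                  -- transportation (Wasserstein) distance
%   \varepsilon        -- radius of the ambiguity ball
%   \ell(h;\xi)        -- loss of hypothesis h on a data point \xi = (x, y)
\begin{equation*}
  \min_{h \in \mathcal{H}} \;
  \sup_{\mathbb{Q} \,:\, W(\mathbb{Q}, \hat{\mathbb{P}}_N) \le \varepsilon} \;
  \mathbb{E}_{\mathbb{Q}}\bigl[\ell(h;\xi)\bigr]
\end{equation*}

Per the abstract, choosing the radius \varepsilon judiciously makes this worst-case value an upper confidence bound on the test loss, which is the source of the generalization guarantees.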
Year
2017
Venue
Journal of Machine Learning Research
Keywords
Distributionally robust optimization, optimal transport, Wasserstein distance, robust optimization, regularization
Field
Expected loss, Mathematical optimization, Empirical distribution function, Robust optimization, Empirical risk minimization, Supervised learning, Regularization (mathematics), Overfitting, Mathematics, Regularization perspectives on support vector machines
DocType
Journal
Volume
20
Issue
103
ISSN
1532-4435
Citations
6
PageRank
0.50
References
11
Authors
3
Name                          Order  Citations  PageRank
Soroosh Shafieezadeh-Abadeh   1      33         3.01
Daniel Kuhn                   2      559        32.80
Peyman Mohajerin Esfahani     3      206        20.74