Title
Robust empirical optimization is almost the same as mean–variance optimization
Abstract
We formulate a distributionally robust optimization problem where the deviation of the alternative distribution is controlled by a ϕ-divergence penalty in the objective, and show that a large class of these problems is essentially equivalent to a mean–variance problem. We also show that while a “small amount of robustness” always reduces the in-sample expected reward, the reduction in the variance, which is a measure of sensitivity to model misspecification, is an order of magnitude larger.
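A minimal sketch of the stated equivalence, in illustrative notation not taken verbatim from the paper (the reward r(x, ξ), empirical distribution P_n, penalty parameter δ > 0, and the robust and empirical optimizers x_δ, x_0 are assumptions): for a smooth ϕ-divergence normalized so that ϕ(1) = 0 and ϕ''(1) > 0, a second-order expansion of the inner adversarial problem gives
\[
\min_{Q}\Big\{\,\mathbb{E}_{Q}[r(x,\xi)] + \tfrac{1}{\delta}\,D_{\phi}(Q \,\|\, P_n)\Big\}
= \mathbb{E}_{P_n}[r(x,\xi)] - \frac{\delta}{2\,\phi''(1)}\,\operatorname{Var}_{P_n}[r(x,\xi)] + o(\delta),
\]
so to first order in δ the robust problem is a mean–variance problem. Under the same assumptions, the trade-off in the abstract's second sentence reads
\[
\mathbb{E}_{P_n}[r(x_0,\xi)] - \mathbb{E}_{P_n}[r(x_\delta,\xi)] = O(\delta^{2}),
\qquad
\operatorname{Var}_{P_n}[r(x_0,\xi)] - \operatorname{Var}_{P_n}[r(x_\delta,\xi)] = O(\delta),
\]
i.e. the loss in in-sample expected reward is an order of magnitude (in δ) smaller than the accompanying reduction in variance.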
Year
2018
DOI
10.1016/j.orl.2018.05.005
Venue
Operations Research Letters
Keywords
Robust empirical optimization, Mean–variance optimization, Data-driven optimization, ϕ-divergence, Regularization, Bias–variance trade-off
Field
Probabilistic-based design optimization, Mathematical optimization, Divergence, Robust optimization, Robustness (computer science), Order of magnitude, Mathematics, Kullback–Leibler divergence
DocType
Journal
Volume
46
Issue
4
ISSN
0167-6377
Citations
2
PageRank
0.36
References
6
Authors
3
Name | Order | Citations | PageRank
Jun-Ya Gotoh | 1 | 117 | 10.17
Michael Jong Kim | 2 | 39 | 5.03
Andrew E. B. Lim | 3 | 298 | 41.99