Title
On Fast Convergence of Proximal Algorithms for SQRT-Lasso Optimization: Don't Worry About Its Nonsmooth Loss Function
Abstract
Many machine learning techniques sacrifice convenient computational structures to gain estimation robustness and modeling flexibility. However, by exploiting the modeling structures, we find that these sacrifices do not always require more computational effort. To shed light on such a free-lunch phenomenon, we study the square-root-Lasso (SQRT-Lasso) type regression problem. Specifically, we show that the nonsmooth loss function of SQRT-Lasso type regression eases tuning effort and gains adaptivity to inhomogeneous noise, yet is not necessarily more challenging to optimize than Lasso. We can directly apply proximal algorithms (e.g., proximal gradient descent, proximal Newton, and proximal quasi-Newton algorithms) without worrying about the nonsmoothness of the loss function. Theoretically, we prove that these proximal algorithms, combined with the pathwise optimization scheme, enjoy fast convergence guarantees with high probability. Numerical results are provided to support our theory.
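To illustrate the point of the abstract, below is a minimal Python sketch of plain proximal gradient descent applied to the SQRT-Lasso objective min_b ||y - X b||_2 / sqrt(n) + lam * ||b||_1. This is a hedged illustration, not the authors' implementation: the backtracking rule, stopping criteria, and synthetic data are heuristic assumptions, and the paper's analysis also covers proximal Newton / quasi-Newton updates and the pathwise optimization scheme, which are not shown here.

import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1 (elementwise soft-thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sqrt_lasso_objective(X, y, b, lam):
    n = X.shape[0]
    return np.linalg.norm(y - X @ b) / np.sqrt(n) + lam * np.sum(np.abs(b))

def sqrt_lasso_proximal_gd(X, y, lam, n_iters=500, step0=1.0, tol=1e-8):
    # Proximal gradient descent for min_b ||y - X b||_2 / sqrt(n) + lam * ||b||_1.
    # The sqrt loss is differentiable whenever the residual is nonzero, so
    # ordinary proximal gradient steps (with backtracking) apply in that regime.
    n, d = X.shape
    b = np.zeros(d)
    for _ in range(n_iters):
        r = y - X @ b
        res_norm = np.linalg.norm(r)
        if res_norm < 1e-12:  # exact interpolation: the only nonsmooth point of the loss
            break
        grad = -X.T @ r / (np.sqrt(n) * res_norm)  # gradient of the sqrt loss
        f_old = sqrt_lasso_objective(X, y, b, lam)
        step = step0
        while True:  # crude backtracking: halve the step until the objective does not increase
            b_new = soft_threshold(b - step * grad, step * lam)
            if sqrt_lasso_objective(X, y, b_new, lam) <= f_old or step < 1e-12:
                break
            step *= 0.5
        if np.linalg.norm(b_new - b) < tol:
            return b_new
        b = b_new
    return b

# Toy usage on synthetic data (hypothetical sizes, not from the paper's experiments)
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
beta_true = np.zeros(20)
beta_true[:3] = 1.0
y = X @ beta_true + 0.1 * rng.standard_normal(100)
b_hat = sqrt_lasso_proximal_gd(X, y, lam=0.1)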
Year
2019
Venue
Uncertainty in Artificial Intelligence
Field
Convergence (routing), Mathematical optimization, Gradient descent, Regression, Lasso (statistics), Worry, Algorithm, Robustness (computer science), Phenomenon, Mathematics, Computation
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
7
Name            Order   Citations   PageRank
Xingguo Li      1       96          19.95
Haoming Jiang   2       12          9.45
Jarvis Haupt    3       1339        131.86
R. Arora        4       489         35.97
Han Liu         5       434         42.70
Mingyi Hong     6       1533        91.29
Tuo Zhao        7       222         40.58