Title |
---|
On Fast Convergence of Proximal Algorithms for SQRT-Lasso Optimization: Don't Worry About Its Nonsmooth Loss Function |
Abstract |
---|
Many machine learning techniques sacrifice convenient computational structures to gain estimation robustness and modeling flexibility. However, by exploring the modeling structures, we find that these sacrifices do not always require more computational effort. To shed light on such a free-lunch phenomenon, we study the square-root Lasso (SQRT-Lasso) type regression problem. Specifically, we show that the nonsmooth loss functions of SQRT-Lasso type regression ease the tuning effort and gain adaptivity to inhomogeneous noise, yet are not necessarily more challenging than Lasso in computation. We can directly apply proximal algorithms (e.g., proximal gradient descent, proximal Newton, and proximal quasi-Newton algorithms) without worrying about the nonsmoothness of the loss function. Theoretically, we prove that these proximal algorithms, combined with the pathwise optimization scheme, enjoy fast convergence guarantees with high probability. Numerical results are provided to support our theory.
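To make the abstract's claim concrete, below is a minimal sketch of proximal gradient descent for the SQRT-Lasso objective min_beta ||y - X beta||_2 / sqrt(n) + lam * ||beta||_1, written in Python/NumPy under our own assumptions: the function names, step-size rule, and synthetic data are illustrative and are not taken from the paper. The point it illustrates is the one the abstract makes: away from exact interpolation (zero residual) the square-root loss is differentiable, so an ordinary gradient step followed by soft-thresholding applies exactly as for Lasso.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1 (elementwise soft-thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sqrt_lasso_pgd(X, y, lam, beta0=None, n_iter=1000, tol=1e-10):
    # Proximal gradient descent for the SQRT-Lasso objective
    #   min_beta ||y - X @ beta||_2 / sqrt(n) + lam * ||beta||_1.
    # The square-root loss is smooth wherever the residual r = y - X @ beta
    # is nonzero, with local smoothness constant ||X||_2^2 / (sqrt(n) * ||r||_2).
    # We use the inverse of that bound as an adaptive step size; a backtracking
    # line search would be more robust but is omitted to keep the sketch short.
    n, d = X.shape
    beta = np.zeros(d) if beta0 is None else beta0.copy()
    op_norm_sq = np.linalg.norm(X, 2) ** 2  # squared spectral norm of X
    for _ in range(n_iter):
        r = y - X @ beta
        r_norm = np.linalg.norm(r)
        if r_norm < 1e-12:  # the loss is nonsmooth only at exact interpolation
            break
        grad = -(X.T @ r) / (np.sqrt(n) * r_norm)
        step = np.sqrt(n) * r_norm / op_norm_sq
        beta_new = soft_threshold(beta - step * grad, step * lam)
        if np.linalg.norm(beta_new - beta) <= tol * max(1.0, np.linalg.norm(beta)):
            return beta_new
        beta = beta_new
    return beta

if __name__ == "__main__":
    # Hypothetical sparse-regression instance for demonstration only.
    rng = np.random.default_rng(0)
    n, d = 100, 200
    X = rng.standard_normal((n, d))
    beta_true = np.zeros(d)
    beta_true[:5] = 1.0
    y = X @ beta_true + 0.1 * rng.standard_normal(n)
    # Pathwise scheme: warm-start along a decreasing lambda sequence.
    beta = None
    for lam in np.geomspace(1.0, 0.05, num=10):
        beta = sqrt_lasso_pgd(X, y, lam, beta0=beta)
    print("nonzeros:", np.sum(np.abs(beta) > 1e-6))
```

The final loop mirrors the pathwise optimization scheme the abstract refers to: each solve is warm-started from the previous solution as lam decreases, which is what the paper's fast-convergence guarantees are stated for.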
Year | Venue | Field |
---|---|---|
2019 | Uncertainty in Artificial Intelligence (UAI) | Mathematical optimization, Gradient descent, Regression, Lasso (statistics), Algorithm, Robustness (computer science), Convergence, Mathematics, Computation
DocType | Citations | PageRank
---|---|---|
Conference | 0 | 0.34 |
References | Authors
---|---|
0 | 7 |
Name | Order | Citations | PageRank |
---|---|---|---|
Xingguo Li | 1 | 96 | 19.95 |
Haoming Jiang | 2 | 12 | 9.45 |
Jarvis Haupt | 3 | 1339 | 131.86 |
R. Arora | 4 | 489 | 35.97 |
Han Liu | 5 | 434 | 42.70 |
Mingyi Hong | 6 | 1533 | 91.29 |
Tuo Zhao | 7 | 222 | 40.58 |