Title
Parallel coordinate descent methods for big data optimization.
Abstract
In this work we show that randomized (block) coordinate descent methods can be accelerated by parallelization when applied to the problem of minimizing the sum of a partially separable smooth convex function and a simple separable convex function. The theoretical speedup over the serial method, measured by the number of iterations needed to approximately solve the problem with high probability, is a simple expression depending on the number of parallel processors and a natural, easily computable measure of separability of the smooth component of the objective function. In the worst case, when no degree of separability is present, there may be no speedup; in the best case, when the problem is separable, the speedup equals the number of processors. Our analysis also covers the regime in which the number of blocks updated at each iteration is random, which allows for modeling situations with busy or unreliable processors. We show that our algorithm is able to solve a LASSO problem involving a matrix with 20 billion nonzeros in 2 hours on a large-memory node with 24 cores.
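The method described in the abstract (PCDM) picks a random subset of τ coordinates each iteration and updates them simultaneously, with step sizes damped by a factor β that depends on τ and on the degree of partial separability ω (the maximum number of coordinates any summand of the smooth part depends on; for LASSO, the maximum number of nonzeros in a row of the data matrix). The sketch below is an illustrative single-threaded simulation for LASSO, not the authors' implementation; the β formula for τ-nice sampling, β = 1 + (ω−1)(τ−1)/max(1, n−1), is taken from the expected separable over-approximation analysis, and all function names are our own.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * |.| applied elementwise."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def pcdm_lasso(A, b, lam, tau=4, iters=500, seed=0):
    """Simulated parallel coordinate descent for
    min_x 0.5*||Ax - b||^2 + lam*||x||_1 (a sketch, not the paper's code)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    L = np.sum(A * A, axis=0)                      # coordinate Lipschitz constants ||A_i||^2
    omega = np.max(np.sum(A != 0, axis=1))         # degree of partial separability
    beta = 1.0 + (omega - 1.0) * (tau - 1.0) / max(1.0, n - 1.0)  # ESO damping factor
    x = np.zeros(n)
    r = A @ x - b                                  # maintained residual Ax - b
    for _ in range(iters):
        S = rng.choice(n, size=tau, replace=False) # tau-nice sampling: tau coords u.a.r.
        g = A[:, S].T @ r                          # partial gradients for the sampled coords
        x_new = soft_threshold(x[S] - g / (beta * L[S]), lam / (beta * L[S]))
        r += A[:, S] @ (x_new - x[S])              # update residual incrementally
        x[S] = x_new
    return x
```

In a real parallel implementation the τ coordinate updates would run on separate processors; the damping by β ≥ 1 is what keeps the simultaneous updates from overshooting when the coordinates interact (ω > 1). When the problem is fully separable (ω = 1), β = 1 and each coordinate takes its full serial step, which is the source of the linear speedup claimed in the abstract.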
Year
2016
DOI
10.1007/s10107-015-0901-6
Venue
Mathematical Programming: Series A and B
Keywords
Parallel coordinate descent, Big data optimization, Partial separability, Huge-scale optimization, Iteration complexity, Expected separable over-approximation, Composite objective, Convex optimization, LASSO, 90C06, 90C25, 49M20, 49M27, 65K05, 68W10, 68W20, 68W40
Field
Discrete mathematics, Mathematical optimization, Matrix (mathematics), Lasso (statistics), Separable space, Convex function, Random coordinate descent, Coordinate descent, Convex optimization, Mathematics, Speedup
DocType
Journal
Volume
156
Issue
1-2
ISSN
1436-4646
Citations
99
PageRank
3.42
References
21
Authors
2
Name, Order, Citations, PageRank
Peter Richtárik, 1, 1314, 84.53
Martin Takáč, 2, 752, 49.49