Title
Understanding and Optimizing Asynchronous Low-Precision Stochastic Gradient Descent
Abstract
Stochastic gradient descent (SGD) is one of the most popular numerical algorithms used in machine learning and other domains. Since this is likely to continue for the foreseeable future, it is important to study techniques that can make it run fast on parallel hardware. In this paper, we provide the first analysis of a technique called Buckwild! that uses both asynchronous execution and low-precision computation. We introduce the DMGC model, the first conceptualization of the parameter space that exists when implementing low-precision SGD, and show that it provides a way to both classify these algorithms and model their performance. We leverage this insight to propose and analyze techniques to improve the speed of low-precision SGD. First, we propose software optimizations that can increase throughput on existing CPUs by up to 11X. Second, we propose architectural changes, including a new cache technique we call an obstinate cache, that increase throughput beyond the limits of current-generation hardware. We also implement and analyze low-precision SGD on the FPGA, which is a promising alternative to the CPU for future SGD systems.
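The paper combines asynchronous (Hogwild!-style) execution with low-precision arithmetic, where model updates are written back in a reduced-precision format using unbiased stochastic rounding. As a minimal illustrative sketch, not the paper's implementation, one SGD step on an 8-bit fixed-point model might look like the following; the function names, the NumPy implementation, and the fixed-point scale of 128 are assumptions chosen for illustration:

import numpy as np

def stochastic_round(x, scale=128):
    # Map x to 8-bit fixed point. Round up or down at random with
    # probability proportional to proximity, so the rounded value
    # equals x in expectation (standard stochastic rounding; the
    # scale and clipping range are illustrative assumptions).
    v = np.asarray(x, dtype=np.float64) * scale
    lo = np.floor(v)
    up = np.random.random(v.shape) < (v - lo)  # round up w.p. frac(v)
    return np.clip(lo + up, -128, 127).astype(np.int8)

def low_precision_sgd_step(w_q, grad, lr=0.01, scale=128):
    # Dequantize the 8-bit model, take an SGD step in full precision,
    # then write the result back with stochastic rounding so the
    # quantization error is zero-mean.
    w = w_q.astype(np.float64) / scale
    return stochastic_round(w - lr * grad, scale)

In the asynchronous setting the paper studies, many worker threads would apply such updates to a shared model without locking; its DMGC model classifies implementations by the precision chosen for the dataset, model, gradients, and inter-worker communication.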
Year
2017
DOI
10.1145/3079856.3080248
Venue
ISCA
Keywords
FPGA, Stochastic gradient descent, asynchrony, low precision, multicore
Field
Asynchronous communication, Central processing unit, Stochastic gradient descent, Algorithm design, Computer science, Cache, Parallel computing, Real-time computing, Software, Throughput, Multi-core processor
DocType
Conference
Volume
2017
ISBN
978-1-5090-5901-0
Citations
14
PageRank
0.64
References
33
Authors
4
Name             Order  Citations  PageRank
Ré Christopher   1      3422       192.34
Ré Christopher   2      3422       192.34
Matthew Feldman  3      36         1.99
Kunle Olukotun   4      4532       373.50