Title
A fast parallel SGD for matrix factorization in shared memory systems
Abstract
Matrix factorization is known to be an effective method for recommender systems that are given only the ratings from users to items. Currently, stochastic gradient descent (SGD) is one of the most popular algorithms for matrix factorization. However, as a sequential approach, SGD is difficult to parallelize for handling web-scale problems. In this paper, we develop a fast parallel SGD method, FPSGD, for shared memory systems. By dramatically reducing the cache-miss rate and carefully addressing the load balance of threads, FPSGD is more efficient than state-of-the-art parallel algorithms for matrix factorization.
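For context, the abstract refers to the standard sequential SGD update for matrix factorization that FPSGD parallelizes. The sketch below is a minimal, single-threaded baseline of that update, not the authors' FPSGD scheduler or its cache-aware blocking; the function name sgd_mf and the hyperparameters (k, eta, lam, epochs) are illustrative assumptions, not values from the paper.

```python
import random
import numpy as np

def sgd_mf(ratings, num_users, num_items, k=16, eta=0.05, lam=0.05, epochs=20):
    """Plain sequential SGD for matrix factorization: approximate R by P @ Q.T.

    ratings: list of (user, item, value) triples.
    This is a generic baseline sketch, not the parallel FPSGD method itself.
    """
    rng = np.random.default_rng(0)
    P = 0.1 * rng.standard_normal((num_users, k))   # user latent factors
    Q = 0.1 * rng.standard_normal((num_items, k))   # item latent factors
    for _ in range(epochs):
        random.shuffle(ratings)                      # visit ratings in random order
        for u, i, r in ratings:
            err = r - P[u] @ Q[i]                    # prediction error for one rating
            # regularized updates of the two rows touched by this rating
            pu_new = P[u] + eta * (err * Q[i] - lam * P[u])
            Q[i] = Q[i] + eta * (err * P[u] - lam * Q[i])
            P[u] = pu_new
    return P, Q

# Toy usage: a 2x2 rating matrix with three observed entries.
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0)]
P, Q = sgd_mf(ratings, num_users=2, num_items=2, k=8)
print(P @ Q.T)
```

Because each update touches only one row of P and one row of Q, independent ratings can in principle be processed concurrently; FPSGD exploits this by partitioning the rating matrix into blocks and scheduling them across threads, which the sketch above does not attempt.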
Year
2013
DOI
10.1145/2507157.2507164
Venue
RecSys
Keywords
sequential approach, fast parallel SGD method, cache-miss rate, recommender system, load balance, popular algorithm, shared memory system, matrix factorization, state-of-the-art parallel algorithm, effective method, stochastic gradient descent, parallel computing
Field
Recommender system, Stochastic gradient descent, Shared memory, Effective method, Parallel algorithm, Computer science, Load balancing (computing), Parallel computing, Matrix decomposition, Theoretical computer science, Thread (computing)
DocType
Conference
Citations
76
PageRank
2.03
References
11
Authors
4
Name            Order  Citations  PageRank
Yong Zhuang     1      254        13.88
Wei-Sheng Chin  2      236        8.76
Yu-Chin Juan    3      252        9.54
Chih-Jen Lin    4      20286      1475.84