Title
A Fast Parallel Stochastic Gradient Method for Matrix Factorization in Shared Memory Systems
Abstract
Matrix factorization is known to be an effective method for recommender systems that are given only the ratings from users to items. Currently, the stochastic gradient (SG) method is one of the most popular algorithms for matrix factorization. However, as a sequential approach, SG is difficult to parallelize for handling web-scale problems. In this article, we develop a fast parallel SG method, FPSG, for shared memory systems. By dramatically reducing the cache-miss rate and carefully addressing the load balance of threads, FPSG is more efficient than state-of-the-art parallel algorithms for matrix factorization.
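For context, the following is a minimal sketch of the kind of SG update for matrix factorization that the abstract refers to: plain sequential SGD on the regularized squared error, not the parallel FPSG scheduler described in the paper. All function names, parameter values, and the toy data are illustrative assumptions.

```python
import numpy as np

def sgd_mf(ratings, num_users, num_items, k=8, lr=0.05, reg=0.05, epochs=20, seed=0):
    """Sequential SGD for matrix factorization with L2 regularization.

    ratings: iterable of (user_index, item_index, rating) triples.
    Returns latent-factor matrices P (num_users x k) and Q (num_items x k).
    """
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((num_users, k))
    Q = 0.1 * rng.standard_normal((num_items, k))
    for _ in range(epochs):
        for u, i, r in ratings:
            p_u = P[u].copy()                       # keep old user factors for the item update
            err = r - p_u @ Q[i]                    # prediction error on this observed rating
            P[u] += lr * (err * Q[i] - reg * p_u)   # gradient step on user factors
            Q[i] += lr * (err * p_u - reg * Q[i])   # gradient step on item factors
    return P, Q

# Toy usage: 3 users, 3 items, four observed ratings (hypothetical data).
data = [(0, 0, 5.0), (0, 1, 3.0), (1, 1, 4.0), (2, 2, 1.0)]
P, Q = sgd_mf(data, num_users=3, num_items=3, k=2)
print(P @ Q.T)  # reconstructed rating matrix
```

FPSG's contribution lies in how such updates are scheduled across threads (partitioning the rating matrix into blocks to reduce cache misses and balance load), which this sequential sketch does not attempt to reproduce.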
Year
2015
DOI
10.1145/2668133
Venue
ACM TIST
Keywords
shared memory algorithm, recommender system, matrix factorization, parallel computing, mathematical software, stochastic gradient descent
Field
Recommender system, Stochastic gradient descent, Shared memory, Effective method, Computer science, Parallel algorithm, Load balancing (computing), Parallel computing, Matrix decomposition, Thread (computing)
DocType
Journal
Volume
6
Issue
1
ISSN
2157-6904
Citations
26
PageRank
0.80
References
14
Authors
4
Name              Order  Citations  PageRank
Wei-Sheng Chin    1      236        8.76
Yong Zhuang       2      254        13.88
Yu-Chin Juan      3      252        9.54
Chih-Jen Lin      4      20286      1475.84