Title
SparseP: Efficient Sparse Matrix Vector Multiplication on Real Processing-In-Memory Architectures
Abstract
Sparse Matrix Vector Multiplication (SpMV) is one of the most thoroughly studied scientific computation kernels, because it lies at the heart of many important applications from the scientific computing, machine learning, and graph analytics domains. SpMV performs indirect memory references as a result of storing the sparse matrix in a compressed format, and irregular memory accesses to the input vector due to the sparsity pattern of the input matrix [1]–[3]. Thus, in CPU and GPU systems, SpMV is a primarily memory-bandwidth-bound kernel for the majority of real sparse matrices, and is bottlenecked by data movement between memory and processors [3]–[6].
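The indirect and irregular memory accesses described in the abstract can be seen directly in an SpMV loop over a matrix stored in the common CSR compressed format. The sketch below is illustrative only and is not taken from the SparseP library; the names (spmv_csr, row_ptr, col_idx, vals) and the toy 3x3 matrix are assumptions for illustration.

#include <stdio.h>

/* Sketch of SpMV (y = A*x) with A stored in CSR format.
 * row_ptr has n_rows+1 entries; col_idx and vals have one entry per nonzero. */
static void spmv_csr(int n_rows, const int *row_ptr, const int *col_idx,
                     const double *vals, const double *x, double *y)
{
    for (int i = 0; i < n_rows; i++) {
        double acc = 0.0;
        for (int k = row_ptr[i]; k < row_ptr[i + 1]; k++) {
            /* col_idx[k] is itself fetched from memory (indirect reference), and
             * x[col_idx[k]] is an irregular access whose location depends on the
             * sparsity pattern: these are the data-movement costs noted above. */
            acc += vals[k] * x[col_idx[k]];
        }
        y[i] = acc;
    }
}

int main(void)
{
    /* Toy 3x3 sparse matrix [[4 0 1], [0 3 0], [2 0 5]] in CSR form (assumed example). */
    int    row_ptr[] = {0, 2, 3, 5};
    int    col_idx[] = {0, 2, 1, 0, 2};
    double vals[]    = {4.0, 1.0, 3.0, 2.0, 5.0};
    double x[]       = {1.0, 2.0, 3.0};
    double y[3];

    spmv_csr(3, row_ptr, col_idx, vals, x, y);
    printf("y = [%g, %g, %g]\n", y[0], y[1], y[2]);  /* expected: [7, 6, 17] */
    return 0;
}

Because each nonzero contributes only one multiply-add while pulling in a value, a column index, and a data-dependent element of x, the kernel performs little computation per byte moved, which is why the abstract characterizes SpMV as memory-bandwidth bound.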
Year
2022
DOI
10.1109/ISVLSI54635.2022.00063
Venue
2022 IEEE Computer Society Annual Symposium on VLSI (ISVLSI)
Keywords
sparse matrices, processors, SparseP, efficient Sparse Matrix Vector Multiplication, Real Processing-In-Memory Architectures, SpMV, thoroughly studied scientific computation kernels, scientific computing, machine learning, analytics domains, indirect memory references, irregular memory accesses, input matrix, primarily memory-bandwidth-bound kernel
DocType
Conference
ISSN
2159-3469
ISBN
978-1-6654-6606-6
Citations
0
PageRank
0.34
References
59
Authors
6
Name                 Order  Citations  PageRank
Christina Giannoula  1      0          0.34
Ivan Fernandez       2      0          0.34
Juan Gómez-Luna      3      22         3.88
N. Koziris           4      1015       107.53
Georgios Goumas      5      268        22.03
Onur Mutlu           6      9446       357.40