Title
Enabling massive deep neural networks with the GraphBLAS
Abstract
Deep Neural Networks (DNNs) have emerged as a core tool for machine learning. The computations performed during DNN training and inference are dominated by operations on the weight matrices describing the DNN. As DNNs incorporate more stages and more nodes per stage, these weight matrices may be required to be sparse because of memory limitations. The GraphBLAS.org math library standard was developed to provide high performance manipulation of sparse weight matrices and input/output vectors. For sufficiently sparse matrices, a sparse matrix library requires significantly less memory than the corresponding dense matrix implementation. This paper provides a brief description of the mathematics underlying the GraphBLAS. In addition, the equations of a typical DNN are rewritten in a form designed to use the GraphBLAS. An implementation of the DNN is given using a preliminary GraphBLAS C library. The performance of the GraphBLAS implementation is measured relative to a standard dense linear algebra library implementation. For various sizes of DNN weight matrices, it is shown that the GraphBLAS sparse implementation outperforms a BLAS dense implementation as the weight matrix becomes sparser.
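The core computation the paper maps onto the GraphBLAS is forward propagation through one DNN stage, y_{k+1} = h(W_k y_k + b_k), where W_k is a sparse weight matrix and h is a nonlinearity such as ReLU. As a minimal sketch of how such a stage can be expressed with the standard GraphBLAS C API, the code below uses GrB_mxv over the conventional (+,*) semiring for the sparse matrix-vector product, GrB_Vector_eWiseAdd_BinaryOp for the bias, and GrB_Vector_apply for the nonlinearity. It is an illustration against the published C API, not the paper's preliminary library code; the relu_f/relu_op names and the dimension n are placeholders.

#include <GraphBLAS.h>

/* User-defined ReLU: z = max(x, 0). The name relu_f is a placeholder. */
void relu_f(void *z, const void *x)
{
    float v = *(const float *) x;
    *(float *) z = v > 0.0f ? v : 0.0f;
}

int main(void)
{
    GrB_init(GrB_NONBLOCKING);

    const GrB_Index n = 1024;            /* nodes per stage (placeholder size) */

    GrB_Matrix W;                        /* sparse weight matrix for one stage */
    GrB_Vector y, b, t;                  /* features, bias, temporary          */
    GrB_Matrix_new(&W, GrB_FP32, n, n);
    GrB_Vector_new(&y, GrB_FP32, n);
    GrB_Vector_new(&b, GrB_FP32, n);
    GrB_Vector_new(&t, GrB_FP32, n);
    /* ... fill W, y, b via GrB_Matrix_build / GrB_Vector_setElement ... */

    /* Conventional arithmetic (+,*) semiring over FP32. */
    GrB_Monoid plus_monoid;
    GrB_Semiring plus_times;
    GrB_Monoid_new_FP32(&plus_monoid, GrB_PLUS_FP32, 0.0f);
    GrB_Semiring_new(&plus_times, plus_monoid, GrB_TIMES_FP32);

    /* Wrap the ReLU function as a GraphBLAS unary operator. */
    GrB_UnaryOp relu_op;
    GrB_UnaryOp_new(&relu_op, relu_f, GrB_FP32, GrB_FP32);

    /* t = W * y: the sparse matrix-vector product that dominates inference. */
    GrB_mxv(t, GrB_NULL, GrB_NULL, plus_times, W, y, GrB_NULL);

    /* t = t + b, then y = ReLU(t): one complete DNN stage. */
    GrB_Vector_eWiseAdd_BinaryOp(t, GrB_NULL, GrB_NULL, GrB_PLUS_FP32, t, b, GrB_NULL);
    GrB_Vector_apply(y, GrB_NULL, GrB_NULL, relu_op, t, GrB_NULL);

    GrB_Matrix_free(&W);
    GrB_Vector_free(&y);
    GrB_Vector_free(&b);
    GrB_Vector_free(&t);
    GrB_UnaryOp_free(&relu_op);
    GrB_Semiring_free(&plus_times);
    GrB_Monoid_free(&plus_monoid);
    GrB_finalize();
    return 0;
}

Because GrB_mxv stores and traverses only the nonzeros of W, its cost scales with the number of nonzero weights rather than with n^2, which is the source of the sparse-versus-dense crossover the paper measures.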
Year
2017
DOI
10.1109/HPEC.2017.8091098
Venue
2017 IEEE High Performance Extreme Computing Conference (HPEC)
Keywords
GraphBLAS.org math library standard, sparse weight matrices, input/output vectors, sparse matrix library, standard dense linear algebra library implementation, DNN weight matrices, GraphBLAS sparse implementation, machine learning, deep neural networks, GraphBLAS C library
DocType
Journal
Volume
abs/1708.02937
ISSN
2377-6943
ISBN
978-1-5386-3473-8
Citations
17
PageRank
0.92
References
31
Authors
6
Name                  Order  Citations  PageRank
Jeremy Kepner         1      606        61.58
Manoj Kumar           2      732        104.98
José E. Moreira       3      2282       230.26
Pratap Pattnaik       4      315        27.89
Mauricio J. Serrano   5      511        55.17
Henry M. Tufo         6      113        13.95