Title
EnGN: A High-Throughput and Energy-Efficient Accelerator for Large Graph Neural Networks
Abstract
Graph neural networks (GNNs) have emerged as a powerful approach to processing non-Euclidean data structures and have proved effective in various application domains such as social networks and e-commerce. However, the graph data maintained in real-world systems can be extremely large and sparse, so employing GNNs to process them requires substantial computation and memory, which translates into considerable energy and resource costs on CPUs and GPUs. In this article, we present a specialized accelerator architecture, EnGN, that enables high-throughput and energy-efficient processing of large-scale GNNs. EnGN is designed to accelerate the three key stages of GNN propagation, which are abstracted as common computing patterns shared by typical GNNs. To support these stages simultaneously, we propose the ring-edge-reduce (RER) dataflow, which tames the poor locality of sparsely and randomly connected vertices, and an RER PE array that implements the dataflow. In addition, we use a graph tiling strategy to fit large graphs into EnGN and make good use of the hierarchical on-chip buffers through adaptive computation reordering and tile scheduling. Overall, EnGN achieves average speedups of 1802.9X, 19.75X, and 2.97X and average energy-efficiency improvements of 1326.35X, 304.43X, and 6.2X over a CPU, a GPU, and the state-of-the-art GCN accelerator HyGCN, respectively.
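For readers unfamiliar with the propagation model the abstract refers to, the sketch below is a minimal, framework-free rendering of a per-layer GNN computing pattern: a per-vertex feature transformation, an aggregation along graph edges, and an update step. The stage names, the dense adjacency matrix, and the toy gnn_layer/weight names are illustrative assumptions for this sketch, not EnGN's actual interface or dataflow.

```python
import numpy as np

def gnn_layer(features, adjacency, weight, activation=np.tanh):
    """Toy dense rendering of a three-stage GNN propagation pattern
    (feature extraction -> aggregate -> update)."""
    # Stage 1: feature extraction -- transform each vertex feature vector.
    extracted = features @ weight            # shape (V, F_out)
    # Stage 2: aggregate -- reduce transformed features of neighbors
    # along the edges (a simple sum reduction is used here).
    aggregated = adjacency @ extracted       # shape (V, F_out)
    # Stage 3: update -- produce the next-layer vertex representations.
    return activation(aggregated)

# Tiny example: 4 vertices, 3 input features, 2 output features.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=np.float32)
X = np.random.rand(4, 3).astype(np.float32)
W = np.random.rand(3, 2).astype(np.float32)
print(gnn_layer(X, A, W))
```

In real workloads the adjacency is extremely large and sparse, which is exactly the irregular-access problem that the paper's RER dataflow and graph tiling strategy are designed to handle.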
Year
2021
DOI
10.1109/TC.2020.3014632
Venue
IEEE Transactions on Computers
Keywords
Graph neural network, accelerator architecture, hardware acceleration
DocType
Journal
Volume
70
Issue
9
ISSN
0018-9340
Citations
7
PageRank
0.63
References
0
Authors
7
Name | Order | Citations | PageRank
Shengwen Liang | 1 | 8 | 1.64
Ying Wang | 2 | 276 | 55.61
Cheng Liu | 3 | 7 | 0.96
Lei He | 4 | 7 | 0.96
Huawei Li | 5 | 202 | 35.48
Dawen Xu | 6 | 11 | 3.45
Xinrong Li | 7 | 1266 | 157.76