Title
LrGAN: A Compact and Energy Efficient PIM-Based Architecture for GAN Training
Abstract
As a powerful unsupervised learning method, the Generative Adversarial Network (GAN) plays an essential role in many domains. However, training a GAN poses four additional challenges: (1) intensive communication caused by the complex training phases of GAN; (2) far more ineffectual computations caused by its peculiar convolutions; (3) more frequent off-chip memory accesses for exchanging intermediate data between the generator and the discriminator; and (4) high energy consumption due to unnecessary fine-grained MLC programming. In this article, we propose LrGAN, a PIM-based GAN accelerator, to address the challenges of training GANs. We first propose a zero-free data reshaping scheme for ReRAM-based PIM, which removes zero-related computations. We then propose a 3D-connected PIM, which can dynamically reconfigure connections inside the PIM according to the dataflows of propagation and updating. After that, we propose an approximate weight update algorithm to avoid unnecessary fine-grained MLC programming. Finally, we propose LrGAN based on these three techniques, providing programmers with different levels of GAN acceleration. Experiments show that LrGAN achieves 47.2×, 21.42×, and 7.46× speedup over an FPGA-based GAN accelerator, a GPU platform, and a ReRAM-based neural network accelerator, respectively. In addition, LrGAN achieves 13.65×, 10.75×, and 1.34× energy savings on average over the GPU platform, PRIME, and the FPGA-based GAN accelerator, respectively.
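The "peculiar convolutions" the abstract refers to are commonly understood to be the transposed convolutions in the GAN generator, which a naive mapping computes by inserting zeros into the input feature map before running an ordinary convolution. The sketch below is a minimal illustration of that zero-insertion cost, not code from the paper; the function names, sizes, and stride are illustrative assumptions. It counts how many multiply-accumulate operations touch an inserted zero, which is the ineffectual work the zero-free data reshaping scheme is designed to remove.

```python
import numpy as np

def zero_insert(x, stride=2):
    """Insert (stride - 1) zeros between neighbouring input elements (naive transposed conv)."""
    h, w = x.shape
    up = np.zeros((h * stride - (stride - 1), w * stride - (stride - 1)))
    up[::stride, ::stride] = x
    return up

def count_ineffectual_macs(x, kernel=3, stride=2):
    """Count MACs whose activation operand is an inserted zero in the zero-inserted map."""
    up = zero_insert(x, stride)
    H, W = up.shape
    total = zero_ops = 0
    for i in range(H - kernel + 1):
        for j in range(W - kernel + 1):
            window = up[i:i + kernel, j:j + kernel]
            total += kernel * kernel
            zero_ops += int(np.sum(window == 0))
    return zero_ops, total

x = np.random.rand(8, 8) + 0.1          # small all-non-zero feature map (illustrative size)
zeros, total = count_ineffectual_macs(x)
print(f"{zeros / total:.0%} of MACs touch an inserted zero")
```

For a stride of 2 in this toy configuration, roughly three quarters of the multiply-accumulates operate on an inserted zero, which is why removing zero-related computations before mapping them onto ReRAM crossbars can pay off.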
Year
2021
DOI
10.1109/TC.2020.3011122
Venue
IEEE Transactions on Computers
Keywords
Processing in memory, generative adversarial network, approximate computing, non-volatile memory
DocType
Journal
Volume
70
Issue
9
ISSN
0018-9340
Citations
0
PageRank
0.34
References
0
Authors
4
Name            Order  Citations  PageRank
Haiyu Mao       1      9          1.81
Jiwu Shu        2      709        72.71
Mingcong Song   3      59         5.42
Tao Li          4      761        47.52