Title
AKG: automatic kernel generation for neural processing units using polyhedral transformations
Abstract
Existing tensor compilers have proven their effectiveness in deploying deep neural networks on general-purpose hardware like CPUs and GPUs, but optimizing for neural processing units (NPUs) is still challenging due to their heterogeneous compute units and complex memory hierarchy. In this paper, we present AKG, a tensor compiler for NPUs. AKG first lowers the tensor expression language to a polyhedral representation, which is used to automate the memory management of NPUs. Unlike existing approaches that resort to manually written schedules, AKG leverages polyhedral schedulers to perform a much wider class of transformations, and extends the semantics of the polyhedral representation to combine complex tiling techniques and hierarchical fusion strategies. We also implement domain-specific optimizations for convolution in AKG. Moreover, to achieve optimal performance, we introduce complementary optimizations in code generation, followed by an auto-tuner. We conduct extensive experiments on benchmarks ranging from single operators to end-to-end networks. The experimental results show that AKG obtains superior performance to both manual scheduling approaches and vendor-provided libraries. We believe AKG will shed light on follow-up compiler work for NPUs.
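For readers unfamiliar with the class of transformations the abstract mentions, the sketch below illustrates loop tiling, one of the tiling techniques that polyhedral schedulers like AKG's derive automatically. This is a hand-written, illustrative example only; the tile size `T` and the function names are hypothetical and do not come from the paper or from AKG's API.

```python
def sum_2d_naive(a):
    """Reference: sum a 2-D list with a plain loop nest."""
    total = 0
    for i in range(len(a)):
        for j in range(len(a[0])):
            total += a[i][j]
    return total

def sum_2d_tiled(a, T=4):
    """The same computation with the iteration space split into T x T
    tiles -- the kind of rewrite a polyhedral tiler performs so that each
    tile's working set fits an accelerator's local memory."""
    n, m = len(a), len(a[0])
    total = 0
    for ii in range(0, n, T):        # outer loops walk over tiles
        for jj in range(0, m, T):
            for i in range(ii, min(ii + T, n)):   # inner loops walk
                for j in range(jj, min(jj + T, m)):  # within one tile
                    total += a[i][j]
    return total
```

Both functions compute the same result; tiling only reorders the iterations, which is exactly why such transformations can be applied automatically once legality is checked.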
Year
2021
DOI
10.1145/3453483.3454106
Venue
PLDI
Keywords
neural networks, neural processing units, polyhedral model, code generation, auto-tuning
DocType
Conference
Citations
2
PageRank
0.36
References
0
Authors
13
Name            Order  Citations  PageRank
Jie Zhao        1      5          3.44
Bojie Li        2      63         5.85
Wang Nie        3      2          0.36
Zhen Geng       4      2          0.36
Renwei Zhang    5      2          0.69
Xiong Gao       6      2          0.36
Bin Cheng       7      4          1.49
Wu Chen         8      591        9.58
Yun Cheng       9      2          0.36
Zheng Li        10     2          0.36
Peng Di         11     2          0.36
Kun Zhang       12     2          0.36
Xuefeng Jin     13     2          0.36