Title
DLBooster: Boosting End-to-End Deep Learning Workflows with Offloading Data Preprocessing Pipelines
Abstract
In recent years, deep learning (DL) has prospered again, driven by improvements in both computing and learning theory. Most emerging studies focus on accelerating the training and refinement of DL models but ignore data preprocessing, even though preprocessing can significantly affect the overall performance of end-to-end DL workflows. Our studies on several image DL workloads show that existing preprocessing backends are quite inefficient: they either perform poorly in throughput (a 30% degradation) or burn too many (>10) CPU cores. Based on these observations, we propose DLBooster, a high-performance data preprocessing pipeline that selectively offloads key workloads to FPGAs, to meet the stringent demands that cutting-edge DL applications place on data preprocessing. Our testbed experiments show that, compared with existing baselines, DLBooster achieves 1.35×~2.4× image processing throughput in several DL workloads while consuming only 1/10 of the CPU cores. It also reduces latency by 1/3 in online image inference.
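To make the offloading idea concrete, below is a minimal Python sketch, not the authors' implementation: a preprocessing frontend that dispatches image decoding to an FPGA-backed decoder when one is available and falls back to the CPU otherwise. The FpgaJpegDecoder class and its API are hypothetical, and the choice of JPEG decoding as the offloaded "key workload" is an assumption based on the image workloads described in the abstract; only the CPU path uses a real library (Pillow).

# Minimal sketch, assuming the offloaded "key workload" is JPEG decoding.
# FpgaJpegDecoder is a hypothetical driver wrapper, not a real API.
import io
from concurrent.futures import ThreadPoolExecutor

from PIL import Image  # CPU JPEG decoder (libjpeg under the hood)


class FpgaJpegDecoder:
    """Hypothetical wrapper around an FPGA JPEG-decode queue."""

    def available(self) -> bool:
        return False  # stub: reports no device in this sketch

    def decode(self, jpeg_bytes: bytes):
        raise NotImplementedError("would enqueue the image on the FPGA")


class Preprocessor:
    """Selectively offload decoding; the CPU worker pool is kept small,
    since the point of offloading is to free CPU cores."""

    def __init__(self, fpga: FpgaJpegDecoder, cpu_workers: int = 2):
        self.fpga = fpga
        self.pool = ThreadPoolExecutor(max_workers=cpu_workers)

    def decode_async(self, jpeg_bytes: bytes):
        # Prefer the accelerator; fall back to the CPU decoder.
        decoder = self.fpga.decode if self.fpga.available() else self._cpu_decode
        return self.pool.submit(decoder, jpeg_bytes)

    @staticmethod
    def _cpu_decode(jpeg_bytes: bytes):
        return Image.open(io.BytesIO(jpeg_bytes)).convert("RGB")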
Year
2019
DOI
10.1145/3337821.3337892
Keywords
Deep learning, FPGAs, cloud computing, data preprocessing
Field
Computer architecture, Computer science, Parallel computing, Testbed, Image processing, Data pre-processing, Boosting (machine learning), Artificial intelligence, Throughput, Deep learning, Multi-core processor, Cloud computing
DocType
Conference
ISBN
978-1-4503-6295-5
Citations
0
PageRank
0.34
References
0
Authors
14
Name              Order  Citations  PageRank
Yang Cheng        1      14         3.92
Li Dan            2      19         7.20
Zhiyuan Guo       3      0          0.34
Binyao Jiang      4      0          0.34
Jiaxin Lin        5      9          3.28
Xi Fan            6      0          0.34
Jinkun Geng       7      23         9.98
Xinyi Yu          8      0          0.34
Wei Bai 0001      9      190        13.46
Lei Qu            10     5          2.42
Ran Shu           11     61         11.22
Peng Cheng        12     31         5.56
Yongqiang Xiong   13     708        45.84
Jianping Wu       14     743        121.01