Title: Fleche: an efficient GPU embedding cache for personalized recommendations
Abstract: Deep learning-based models have come to dominate production recommendation systems. However, the gap between CPU-side DRAM data access and GPU processing still impedes their inference performance. A GPU-resident cache can bridge this gap, yet we find that existing systems leave the benefits of caching the embedding table, a huge sparse structure, on the GPU unexploited. In this paper, we present Fleche, a holistic cache scheme with detailed designs for efficient GPU-resident embedding caching. Fleche (1) uses one cache backend for all embedding tables to improve total cache utilization, and (2) merges small kernel calls into one unitary call to reduce the overhead of kernel maintenance (e.g., kernel launching and synchronizing). Furthermore, we carefully design the cache query workflow for finer-grained parallelism. Evaluations with real-world datasets show that, compared with the prior art, Fleche significantly improves the throughput of the embedding layer by 2.0–5.4× and achieves up to 2.4× speedup in end-to-end inference throughput.
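The two mechanisms named in the abstract (a single cache backend shared by all embedding tables, and merging many small per-table lookups into one unitary kernel call) can be illustrated with a minimal CUDA sketch. This is not Fleche's implementation; it assumes a hypothetical direct-mapped GPU hash table whose keys pack a table id together with an embedding id, so that queries from every table can be batched and served by a single kernel launch.

```cuda
// Minimal sketch (not Fleche's code): one GPU-resident cache shared by all
// embedding tables. Keys from different tables are disambiguated by packing
// the table id into the high bits of the key, so one merged batch of queries
// needs only one kernel launch instead of one small launch per table.
#include <cuda_runtime.h>
#include <cstdint>

constexpr int      EMB_DIM   = 64;        // embedding width (assumed)
constexpr uint32_t NUM_SLOTS = 1 << 20;   // number of cache slots (assumed)

struct CacheSlot {
    uint64_t key;            // (table_id << 48) | embedding_id
    float    vec[EMB_DIM];   // cached embedding vector
};

__device__ __forceinline__ uint64_t pack_key(uint32_t table_id, uint64_t emb_id) {
    return (static_cast<uint64_t>(table_id) << 48) | (emb_id & 0xFFFFFFFFFFFFULL);
}

// One thread block per query; threads in the block copy the vector cooperatively.
__global__ void unified_cache_lookup(const uint32_t* table_ids,
                                     const uint64_t* emb_ids,
                                     const CacheSlot* cache,
                                     float*   out,        // [num_queries, EMB_DIM]
                                     uint8_t* hit_flags,  // 1 = hit, 0 = miss
                                     int num_queries) {
    int q = blockIdx.x;
    if (q >= num_queries) return;

    uint64_t key  = pack_key(table_ids[q], emb_ids[q]);
    uint32_t slot = static_cast<uint32_t>((key * 0x9E3779B97F4A7C15ULL) >> 32)
                    % NUM_SLOTS;

    bool hit = (cache[slot].key == key);     // direct-mapped probe for brevity
    if (threadIdx.x == 0) hit_flags[q] = hit ? 1 : 0;

    if (hit) {
        for (int d = threadIdx.x; d < EMB_DIM; d += blockDim.x)
            out[q * EMB_DIM + d] = cache[slot].vec[d];
    }
    // Misses are left for the host to fetch from CPU-side DRAM and refill.
}
```

A host-side launch such as unified_cache_lookup<<<num_queries, 32>>>(...) then stands in for the many small per-table launches the abstract refers to; the paper's actual hash scheme, eviction policy, miss handling, and query workflow are not described in this record and are not modeled here.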
Year: 2022
DOI: 10.1145/3492321.3519554
Venue: European Conference on Computer Systems
DocType: Conference
Citations: 0
PageRank: 0.34
References: 0
Authors: 7
Name          Order  Citations  PageRank
Minhui Xie    1      0          0.34
Youyou Lu     2      356        30.81
Jiazhen Lin   3      0          0.34
Qing Wang     4      5          5.47
Jian Gao      5      0          0.34
Kai Ren       6      0          0.34
Jiwu Shu      7      709        72.71