Title
Scalable and Fast Lazy Persistency on GPUs
Abstract
GPU applications, including many scientific and machine learning workloads, increasingly demand larger memory capacity. NVM promises higher density than DRAM and better future scaling potential. Long-running GPU applications can benefit from NVM by exploiting its persistency, which allows data in memory to be recovered after a crash. In this paper, we propose mapping Lazy Persistency (LP) to GPUs and identify the design space of such a mapping. We then characterize LP performance on GPUs, varying the checksum type, reduction method, use of locking, and hash table design. Armed with insights into the performance bottlenecks, we propose a hash table-less method that scales well to hundreds and thousands of threads, achieving persistency with nearly negligible (2.1%) slowdown across a variety of representative benchmarks. We also propose directive-based programming language support to simplify the effort of adding LP to GPU applications.
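As context for the abstract's hash table-less design, a minimal CUDA sketch is given below, under assumptions not taken from the paper: each thread block owns one persist region, computes a simple additive checksum, and stores it at a slot indexed directly by the region id, so no hash table lookup or lock is needed. All identifiers (lp_kernel, lp_checksums, REGION_SIZE) are hypothetical, and NVM placement and persist-ordering details are omitted.

// Minimal sketch (not the paper's implementation) of a hash-table-less
// Lazy Persistency kernel. Each block owns one persist region and writes
// its checksum to a slot indexed by the region id; slot ownership is
// implicit in the block-to-region mapping, so no hash table or lock is
// needed. All names here are hypothetical.
#include <cuda_runtime.h>

constexpr int REGION_SIZE = 256;  // elements per persist region (assumed)

__global__ void lp_kernel(const float* in, float* out,
                          unsigned long long* lp_checksums)
{
    int region = blockIdx.x;
    int base   = region * REGION_SIZE;

    // Per-thread partial checksum over the elements this thread writes.
    unsigned long long local = 0;
    for (int i = threadIdx.x; i < REGION_SIZE; i += blockDim.x) {
        float v = 2.0f * in[base + i];   // stand-in for the real computation
        out[base + i] = v;
        local += __float_as_uint(v);     // additive checksum over raw bits
    }

    // Block-wide reduction of partial checksums (atomics shown for
    // brevity; the paper characterizes several reduction methods).
    __shared__ unsigned long long block_sum;
    if (threadIdx.x == 0) block_sum = 0;
    __syncthreads();
    atomicAdd(&block_sum, local);
    __syncthreads();

    // Order the data writes before the checksum, then publish it. On
    // recovery, regions whose stored checksum does not match the data
    // are recomputed. (Actual NVM persist ordering is omitted here.)
    if (threadIdx.x == 0) {
        __threadfence();
        lp_checksums[region] = block_sum;
    }
}

Indexing checksum slots by region id is what removes the hash table: each slot has a single implicit owner, which is why no locking is required even with thousands of concurrent threads.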
Year
2020
DOI
10.1109/IISWC50251.2020.00032
Venue
2020 IEEE International Symposium on Workload Characterization (IISWC)
Keywords
data crash recovery, directive-based programming language, DRAM, fast lazy persistency mapping, hash table-less method, hash table designs, reduction method, checksum type, LP performance, design space, GPU applications, NVM, memory capacity, machine learning applications
DocType
Conference
ISBN
978-1-7281-7646-8
Citations
0
PageRank
0.34
References
14
Authors
4
Name                          Order  Citations  PageRank
Ardhi Wiratama Baskara Yudha  1      1          1.71
Keiji Kimura                  2      120        23.20
Huiyang Zhou                  3      994        63.26
Yan Solihin                   4      2057       111.56