Title
Selective Caching: Avoiding Performance Valleys in Massively Parallel Architectures
Abstract
Emerging general-purpose graphics processing units (GPGPUs) employ a memory hierarchy very similar to that of modern multi-core processors: they typically have multiple levels of on-chip caches and a DDR-like off-chip main memory. In such massively parallel architectures, caches are expected to reduce the average data access latency by reducing the number of off-chip memory accesses; however, our extensive experimental studies confirm that not all applications use the on-chip caches efficiently. Although GPGPUs are adopted to run a wide range of general-purpose applications, conventional cache management policies cannot achieve optimal performance across applications with different memory characteristics. This paper first investigates the underlying reasons for the inefficiency of common cache management policies in GPGPUs. To resolve those issues, we then propose (i) a characterization mechanism that analyzes each kernel at runtime and (ii) a selective caching policy that manages the flow of cache accesses. Evaluation results on the studied platform show that our proposed dynamically reconfigurable cache hierarchy improves system performance by up to 105% (27% on average) over a wide range of modern GPGPU applications, which is within 10% of the optimal improvement.
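The mechanism the abstract describes is architectural, but its core idea (characterize each kernel at runtime, then route that kernel's accesses around the cache when caching hurts) can be illustrated in software. The sketch below is a minimal, hypothetical analogy in CUDA, not the authors' implementation: it times one trial launch of a kernel whose loads cache in L1 (__ldca) against one whose loads bypass L1 and cache only in L2 (__ldcg; both intrinsics require compute capability 3.2+), then keeps the faster variant for the remaining launches. The kernel name stream_sum, the trial-timing heuristic, and all sizes are illustrative assumptions.

```cuda
// Hypothetical sketch of per-kernel selective caching: a characterization
// phase picks between L1-cached and L1-bypassing loads for this kernel.
#include <cstdio>
#include <cuda_runtime.h>

// BYPASS_L1 selects __ldcg() (cache at L2 only, bypassing L1);
// otherwise __ldca() caches at all levels.
template <bool BYPASS_L1>
__global__ void stream_sum(const float* __restrict__ in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float v = BYPASS_L1 ? __ldcg(&in[i]) : __ldca(&in[i]);
        out[i] = v + 1.0f;
    }
}

// Launch one variant of the kernel and return its runtime in milliseconds.
static float time_variant(bool bypass, const float* d_in, float* d_out, int n) {
    cudaEvent_t beg, end;
    cudaEventCreate(&beg); cudaEventCreate(&end);
    dim3 block(256), grid((n + 255) / 256);
    cudaEventRecord(beg);
    if (bypass) stream_sum<true><<<grid, block>>>(d_in, d_out, n);
    else        stream_sum<false><<<grid, block>>>(d_in, d_out, n);
    cudaEventRecord(end);
    cudaEventSynchronize(end);
    float ms = 0.0f;
    cudaEventElapsedTime(&ms, beg, end);
    cudaEventDestroy(beg); cudaEventDestroy(end);
    return ms;
}

int main() {
    const int n = 1 << 24;
    float *d_in, *d_out;
    cudaMalloc(&d_in, n * sizeof(float));
    cudaMalloc(&d_out, n * sizeof(float));
    cudaMemset(d_in, 0, n * sizeof(float));

    // Characterization phase: time one short trial of each variant.
    float t_cached = time_variant(false, d_in, d_out, n);
    float t_bypass = time_variant(true,  d_in, d_out, n);

    // Selective-caching decision: keep the faster variant for the
    // remaining launches of this kernel.
    bool bypass = t_bypass < t_cached;
    printf("cached %.3f ms, bypass %.3f ms -> %s L1\n",
           t_cached, t_bypass, bypass ? "bypassing" : "using");
    for (int iter = 0; iter < 10; ++iter)
        time_variant(bypass, d_in, d_out, n);

    cudaFree(d_in); cudaFree(d_out);
    return 0;
}
```

The per-kernel decision mirrors the paper's per-kernel characterization; the actual proposal makes this choice inside the cache hierarchy at runtime rather than via trial launches in host code.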
Year
2020
DOI
10.1109/PDP50117.2020.00051
Venue
2020 28th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP)
Keywords
memory hierarchy, on-chip caches, DDR-like off-chip main memory, massively parallel architectures, off-chip memory accesses, selective caching policy, cache hierarchy, GPGPU applications, general purpose graphics processing units
DocType
Conference
ISSN
1066-6192
ISBN
978-1-7281-6583-7
Citations
0
PageRank
0.34
References
11
Authors
3
Name | Order | Citations | PageRank
Amin Jadidi | 1 | 77 | 5.45
Mahmut T. Kandemir | 2 | 7371 | 568.54
Chita R. Das | 3 | 1467 | 80.03