Abstract |
---|
This paper summarizes the idea of ChargeCache, which was published in HPCA 2016 [51], and examines the work's significance and future potential. DRAM latency continues to be a critical bottleneck for system performance. In this work, we develop a low-cost mechanism, called ChargeCache, that enables faster access to recently-accessed rows in DRAM, with no modifications to DRAM chips. Our mechanism is based on the key observation that a recently-accessed row has more charge and thus the following access to the same row can be performed faster. To exploit this observation, we propose to track the addresses of recently-accessed rows in a table in the memory controller. If a later DRAM request hits in that table, the memory controller uses lower timing parameters, leading to reduced DRAM latency. Row addresses are removed from the table after a specified duration to ensure rows that have leaked too much charge are not accessed with lower latency. We evaluate ChargeCache on a wide variety of workloads and show that it provides significant performance and energy benefits for both single-core and multi-core systems. |
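The tracking table described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the class name, the table capacity, the eviction policy, and the caching-duration value are all assumptions chosen for clarity; the paper defines the actual duration by how long a row retains enough charge to be safely accessed with lowered timings.

```python
class ChargeCacheSketch:
    """Illustrative model of a table of recently-accessed DRAM rows.

    Names and parameter values are hypothetical; the mechanism follows
    the abstract: record recently-accessed rows, report a hit while the
    entry is still fresh, and expire entries after a fixed duration so
    rows that have leaked too much charge are never served with
    lowered timing parameters.
    """

    def __init__(self, capacity=128, caching_duration=1_000_000):
        self.capacity = capacity
        self.caching_duration = caching_duration  # in controller cycles
        self.table = {}  # row address -> expiry cycle (insertion-ordered)

    def on_row_access(self, row_addr, now):
        # Record the row; evict the oldest entry if the table is full
        # (FIFO here purely for simplicity of the sketch).
        if row_addr not in self.table and len(self.table) >= self.capacity:
            oldest = next(iter(self.table))
            del self.table[oldest]
        self.table[row_addr] = now + self.caching_duration

    def lookup(self, row_addr, now):
        # Hit only if the row was accessed recently enough; expired
        # entries are dropped so they cannot produce stale hits.
        expiry = self.table.get(row_addr)
        if expiry is None:
            return False
        if now > expiry:
            del self.table[row_addr]
            return False
        return True
```

In this model the memory controller would call `lookup` before activating a row and, on a hit, issue the access with reduced timing parameters (e.g. a lower tRCD/tRAS) instead of the standard ones.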
Year | Venue | Field |
---|---|---|
2018 | arXiv: Hardware Architecture | DRAM, Row, Bottleneck, Locality of reference, Latency (engineering), Computer science, Parallel computing, Exploit, Memory controller |

DocType | Volume | Citations |
---|---|---|
Journal | abs/1805.03969 | 0 |

PageRank | References | Authors |
---|---|---|
0.34 | 31 | 7 |
Name | Order | Citations | PageRank |
---|---|---|---|
Hasan Hassan | 1 | 352 | 17.76 |
Gennady Pekhimenko | 2 | 706 | 28.75 |
Nandita Vijaykumar | 3 | 146 | 7.55 |
Vivek Seshadri | 4 | 992 | 32.76 |
Dong-Hyuk Lee | 5 | 1254 | 48.26 |
Oguz Ergin | 6 | 424 | 25.84 |
Onur Mutlu | 7 | 9446 | 357.40 |