Title
Amoeba-Cache: Adaptive Blocks for Eliminating Waste in the Memory Hierarchy
Abstract
The fixed geometries of current cache designs do not adapt to the working set requirements of modern applications, causing significant inefficiency. The short block lifetimes and moderate spatial locality exhibited by many applications result in only a few words of a block being touched prior to eviction. Unused words occupy between 17--80% of a 64K L1 cache and 1--79% of a 1MB private LLC. This effectively shrinks the cache size, increases the miss rate, and wastes on-chip bandwidth. Because of wire scaling limitations, unused-word transfers comprise a large fraction (11%) of on-chip cache hierarchy energy consumption. We propose Amoeba-Cache, a design that supports a variable number of cache blocks, each of a different granularity. Amoeba-Cache employs a novel organization that completely eliminates the tag array, treating the storage array as uniform and morphable between tags and data. This enables the cache to harvest space from unused words in blocks for additional tag storage, thereby supporting a variable number of tags (and correspondingly, blocks). Amoeba-Cache adjusts individual cache line granularities according to the spatial locality in the application. It adapts to the appropriate granularity both for different data objects in an application and for different phases of access to the same data. Overall, compared to a fixed-granularity cache, Amoeba-Cache reduces the miss rate on average (geometric mean) by 18% at the L1 level and by 18% at the L2 level, and reduces L1--L2 miss bandwidth by 46%. Correspondingly, Amoeba-Cache reduces on-chip memory hierarchy energy by as much as 36% (mcf) and improves performance by as much as 50% (art).
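The idea of variable-granularity blocks can be illustrated with a minimal sketch (not the paper's implementation; the class, its eviction policy, and all parameters here are hypothetical): each resident block covers an arbitrary contiguous word range of a memory region, so a hit requires the requested word to fall inside some resident block's range, and set capacity is accounted in words rather than in fixed-size lines.

```python
# Illustrative sketch of a variable-granularity cache set, assuming
# 8-word memory regions (the span of one conventional 64B block).
# Not the paper's design: FIFO eviction stands in for a real policy.

WORDS_PER_REGION = 8


class AmoebaSet:
    def __init__(self, capacity_words):
        self.capacity_words = capacity_words
        self.blocks = []  # list of (region_tag, start_word, end_word)

    def used_words(self):
        # Words occupied by all resident variable-size blocks.
        return sum(end - start for _, start, end in self.blocks)

    def lookup(self, region_tag, word):
        # Hit iff some block for this region covers the word's offset.
        return any(tag == region_tag and start <= word < end
                   for tag, start, end in self.blocks)

    def insert(self, region_tag, start, end):
        # Evict (FIFO, for simplicity) until the new range fits.
        while self.used_words() + (end - start) > self.capacity_words:
            self.blocks.pop(0)
        self.blocks.append((region_tag, start, end))


s = AmoebaSet(capacity_words=16)
s.insert(0x10, 2, 4)      # a small 2-word block
s.insert(0x20, 0, 8)      # a full 8-word block
print(s.lookup(0x10, 3))  # word 3 lies inside [2, 4): hit
print(s.lookup(0x10, 5))  # word 5 was never fetched: miss
```

Because a 2-word block consumes only 2 of the set's 16 words, the freed space can hold additional blocks (and, in the actual design, additional tags), which is the source of the capacity savings the abstract describes.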
Year: 2012
DOI: 10.1109/MICRO.2012.42
Venue: MICRO
Keywords: current cache design, adaptive blocks, variable number, unused word, L1 level, individual cache line, on-chip cache hierarchy energy, cache size, memory hierarchy, cache block, fixed granularity cache, L1 cache, energy efficiency, cache architecture
Field: Cache-oblivious algorithm, Cache invalidation, Cache pollution, Cache, Computer science, Parallel computing, Page cache, Cache algorithms, Real-time computing, Cache coloring, Smart Cache
DocType: Conference
ISSN: 1072-4451
ISBN: 978-1-4673-4819-5
Citations: 17
PageRank: 0.70
References: 21
Authors: 6
Name                | Order | Citations | PageRank
Snehasish Kumar     | 1     | 41        | 3.73
Hongzhou Zhao       | 2     | 87        | 3.52
Arrvindh Shriraman  | 3     | 292       | 17.70
Eric Matthews       | 4     | 46        | 4.89
Sandhya Dwarkadas   | 5     | 3504      | 257.31
Lesley Shannon      | 6     | 119       | 9.10