Title
A fully associative, tagless DRAM cache
Abstract
This paper introduces a tagless cache architecture for large in-package DRAM caches. A conventional die-stacked DRAM cache has both a TLB and a cache tag array, responsible for virtual-to-physical and physical-to-cache address translation, respectively. We propose to align the granularity of caching with the OS page size and take a unified approach to address translation and cache tag management. To this end, we introduce the cache-map TLB (cTLB), which stores virtual-to-cache, instead of virtual-to-physical, address mappings. On a TLB miss, the TLB miss handler allocates the requested block into the cache if it is not yet cached, and updates both the page table and the cTLB with the virtual-to-cache address mapping. Assuming the availability of large in-package DRAM caches, this ensures that an access to a memory region within the TLB reach always hits in the cache with low hit latency: a TLB access immediately returns the exact location of the requested block in the cache, eliminating the tag-checking operation. The remaining cache space is used as a victim cache for memory pages recently evicted from the cTLB. By completely eliminating data structures for cache tag management from either on-die SRAM or in-package DRAM, the proposed DRAM cache achieves the best scalability and hit latency, while maintaining the high hit rate of a fully associative cache. Our evaluation with 3D Through-Silicon Via (TSV)-based in-package DRAM demonstrates that the proposed cache improves IPC and energy efficiency by 30.9% and 39.5%, respectively, compared to a baseline with no DRAM cache. These numbers translate to 4.3% and 23.8% improvements over an impractical SRAM-tag cache requiring megabytes of on-die SRAM storage, thanks to the lower hit latency and zero energy spent on cache tags.
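The following is a minimal C sketch of the cTLB miss-handling flow described in the abstract, not the authors' implementation: all structures and helper names (page_table_lookup via a toy page_table array, dram_cache_allocate, copy_page, ctlb_insert) are hypothetical stand-ins for hardware/OS mechanisms. It illustrates how a miss handler caches the requested page if needed and installs a virtual-to-cache mapping so later accesses hit without a tag check.

/*
 * Sketch of cTLB miss handling (assumptions noted above; not the authors' code).
 */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define NUM_PAGES 16

typedef struct {
    uint64_t phys_addr;   /* page location in off-package memory        */
    uint64_t cache_addr;  /* page location in the in-package DRAM cache */
    bool     cached;      /* is the page resident in the DRAM cache?    */
} pte_t;

static pte_t    page_table[NUM_PAGES];  /* toy page table, indexed by VPN  */
static uint64_t next_free_frame = 0;    /* trivial cache-frame allocator   */

/* Hypothetical stand-ins for hardware/OS mechanisms. */
static uint64_t dram_cache_allocate(void) { return next_free_frame++; }
static void copy_page(uint64_t from, uint64_t to) { (void)from; (void)to; }
static void ctlb_insert(uint64_t vpn, uint64_t cache_addr) {
    printf("cTLB: VPN %llu -> cache frame %llu\n",
           (unsigned long long)vpn, (unsigned long long)cache_addr);
}

/* On a cTLB miss: cache the page if needed, then install the VA->cache mapping. */
static uint64_t ctlb_miss_handler(uint64_t vpn)
{
    pte_t *pte = &page_table[vpn];

    if (!pte->cached) {
        pte->cache_addr = dram_cache_allocate();      /* pick a cache frame       */
        copy_page(pte->phys_addr, pte->cache_addr);   /* bring the page in-package */
        pte->cached = true;                           /* page table records mapping */
    }
    ctlb_insert(vpn, pte->cache_addr);                /* cTLB now maps VA -> cache */
    return pte->cache_addr;
}

int main(void)
{
    ctlb_miss_handler(3);   /* first touch: page is allocated and cached      */
    ctlb_miss_handler(7);   /* another page                                   */
    ctlb_miss_handler(3);   /* already cached: only the cTLB entry is refilled */
    return 0;
}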
Year
2015
DOI
10.1145/2749469.2750383
Venue
International Symposium on Computer Architecture
Keywords
fully associative tagless DRAM cache, tagless cache architecture, die-stacked DRAM cache, cache tag array, virtual-to-physical address translation, physical-to-cache address translation, caching granularity alignment, OS page size, cache tag management, cache-map TLB, virtual-to-cache address mapping storage, TLB miss handler, memory access, tag-checking operation, cache space, memory pages, data structure, hit latency, 3D through-silicon via-based in-package DRAM cache, 3D TSV-based in-package DRAM cache, IPC, energy efficiency, on-die SRAM storage, energy waste
Field
Cache invalidation, Cache pollution, Cache, Computer science, CPU cache, Parallel computing, Page cache, Cache algorithms, Cache coloring, Smart Cache, Operating system
DocType
Conference
Volume
43
Issue
3S
ISSN
0163-5964
Citations
25
PageRank
0.76
References
20
Authors
7
Name | Order | Citations | PageRank
Yongjun Lee | 1 | 39 | 3.49
Jongwon Kim | 2 | 1042 | 153.38
Hakbeom Jang | 3 | 25 | 0.76
Hyunggyun Yang | 4 | 25 | 0.76
Jangwoo Kim | 5 | 447 | 35.38
Jinkyu Jeong | 6 | 25 | 0.76
Jae W. Lee | 7 | 25 | 0.76