Title
Cache Management with Partitioning-Aware Eviction and Thread-Aware Insertion/Promotion Policy
Abstract
With recent advances in processor technology, the LRU-based shared last-level cache (LLC) has been widely employed in modern chip multiprocessors (CMPs). However, past research [1,2,8,9] indicates that the cache performance of the LLC, and consequently of the CMP as a whole, can degrade severely under LRU when inter-thread interference occurs or when the working set exceeds the cache size. Existing approaches to this degradation problem yield limited improvement in overall cache performance because they typically target a single type of memory access behavior and therefore do not fully consider the tradeoffs among different behavior types. In this paper, we propose a unified cache management policy called Partitioning-Aware Eviction and Thread-aware Insertion/Promotion (PAE-TIP) that effectively enhances capacity management and adaptive insertion/promotion, thereby improving overall cache performance. Specifically, PAE-TIP employs an adaptive mechanism to decide where to place incoming lines and where to move lines on a hit, and it chooses a victim line based on the target partitioning given by utility-based cache partitioning (UCP) [2]. Our study shows that PAE-TIP can cover a variety of memory access behaviors simultaneously and provides a good tradeoff for overall cache performance improvement while retaining competitively low hardware and design overhead. An evaluation on 4-way CMPs shows that a PAE-TIP-managed LLC improves overall performance by 19.3% on average over the LRU policy; furthermore, the performance benefit of PAE-TIP is 1.09x over PIPP, 1.11x over TADIP, and 1.12x over UCP.
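The partitioning-aware eviction step described in the abstract can be illustrated with a minimal sketch. This is a hypothetical rendering, not the paper's implementation: the per-line dictionaries, the `target_alloc` mapping (standing in for per-thread way budgets from a UCP-style utility monitor), and the function `choose_victim` are all assumptions made for illustration.

```python
# Hypothetical sketch of partitioning-aware victim selection within one
# set-associative cache set. Each line records its owning thread and its
# LRU stack position (larger = less recently used); target_alloc maps a
# thread id to the number of ways a UCP-style monitor has budgeted for it.

def choose_victim(lines, target_alloc):
    """Evict the LRU line of a thread exceeding its target allocation;
    fall back to global LRU when no thread is over budget."""
    # Count ways currently held by each thread in this set.
    held = {}
    for ln in lines:
        held[ln["thread"]] = held.get(ln["thread"], 0) + 1
    # Threads holding more ways than their target are eviction candidates.
    over = {t for t, n in held.items() if n > target_alloc.get(t, 0)}
    candidates = [ln for ln in lines if ln["thread"] in over] or lines
    # Among candidates, pick the least recently used line.
    return max(candidates, key=lambda ln: ln["lru_pos"])
```

In this sketch, confining eviction to over-allocated threads is what steers the set's occupancy toward the UCP target partition, while the LRU fallback keeps the policy well defined when every thread is within budget.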
Year
2010
DOI
10.1109/ISPA.2010.45
Venue
ISPA
Keywords
last-level cache,overall cache performance improvement,thread-aware insertion,memory access behavior,overall cache performance,utility-based cache partitioning,performance benefit,cache size,unified cache management policy,promotion policy,cache management,cache performance,partitioning-aware eviction,shared cache,capacity management,throughput,benchmark testing,promotion,instruction sets,measurement,insertion
Field
Cache invalidation,Cache pollution,Computer science,CPU cache,Cache,Real-time computing,Cache algorithms,Cache coloring,Smart Cache,Adaptive replacement cache,Distributed computing
DocType
Conference
Citations
1
PageRank
0.36
References
14
Authors
6
Name            Order  Citations  PageRank
Junmin Wu       1      3          2.77
Xiufeng Sui     2      27         5.83
Yixuan Tang     3      2          0.72
Xiaodong Zhu    4      7          2.87
Jing Wang       5      1          0.36
Guoliang Chen   6      3054       6.48