Title
Exploiting Parallelization on Address Translation: Shared Page Walk Cache.
Abstract
Current processors carry out a Page Table Walk to translate a virtual address into a physical address. This process requires several memory accesses to retrieve an entry from each level of the Page Table tree. Since each main memory access has a large latency, the address translation process may significantly penalize system performance. Translation Lookaside Buffers (TLBs) avoid these accesses entirely by storing the most recent translations, but on a TLB miss a Page Table Walk is still required. An effective way to accelerate it is to introduce a cache inside or near the MMU that stores partial translations. Recent works have shown the advantages of using a Page Walk Cache in the service of a single core. In this paper we propose a Page Walk Cache shared among the cores of a CMP, analyze its behavior, and show how it improves performance for parallel applications compared to a private Page Walk Cache scheme. Evaluation results show that the shared PTWC increases the hit rate by up to 19.8%, leading to a TLB miss latency reduction of up to 12.2% with respect to a private PTWC.
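The abstract describes a multi-level Page Table Walk accelerated by a Page Walk Cache that holds partial translations. The C sketch below is only an illustration of that idea under assumed structures, not the paper's implementation: it models a four-level radix walk that probes a single walk cache (the names pwc, pwc_lookup, pwc_fill, walk are hypothetical) before touching memory, so a hit on an upper-level partial translation skips those memory accesses; letting every core's walker probe the same cache corresponds to the shared organization studied in the paper.

#include <stdint.h>
#include <stdlib.h>
#include <stdio.h>

#define LEVELS   4                 /* four-level radix page table (x86-64 style) */
#define IDX_BITS 9                 /* 9 index bits per level                     */
#define PG_SHIFT 12                /* 4 KiB pages                                */
#define ENTRIES  (1u << IDX_BITS)
#define PWC_SETS 64                /* toy direct-mapped page-walk cache          */

typedef struct table {
    struct table *next[ENTRIES];   /* pointers to next-level tables (levels 0-2) */
    uint64_t frame[ENTRIES];       /* physical frame base addresses (leaf level) */
} table_t;

typedef struct {
    uint64_t prefix;   /* upper 'depth' virtual-address indices already resolved */
    int depth;         /* how many upper levels this partial translation covers  */
    table_t *next;     /* base of the table where the walk can resume            */
    int valid;
} pwc_entry;

static pwc_entry pwc[PWC_SETS];    /* one cache probed by every walker: "shared" */

static uint64_t idx(uint64_t va, int lvl) {
    return (va >> (PG_SHIFT + IDX_BITS * (LEVELS - 1 - lvl))) & (ENTRIES - 1);
}

static uint64_t prefix_of(uint64_t va, int depth) {
    return va >> (PG_SHIFT + IDX_BITS * (LEVELS - depth));
}

/* Probe the PWC for the deepest cached partial translation of 'va'. */
static int pwc_lookup(uint64_t va, table_t **start) {
    for (int depth = LEVELS - 1; depth >= 1; depth--) {
        pwc_entry *e = &pwc[prefix_of(va, depth) % PWC_SETS];
        if (e->valid && e->depth == depth && e->prefix == prefix_of(va, depth)) {
            *start = e->next;
            return depth;          /* resume the walk at this level */
        }
    }
    return 0;                      /* miss: walk from the root      */
}

static void pwc_fill(uint64_t va, int depth, table_t *next) {
    pwc[prefix_of(va, depth) % PWC_SETS] =
        (pwc_entry){ prefix_of(va, depth), depth, next, 1 };
}

/* Translate 'va', counting the memory accesses the walk performs. */
static uint64_t walk(table_t *root, uint64_t va, int *mem_refs) {
    table_t *t = root;
    int lvl = pwc_lookup(va, &t);  /* skip the levels the PWC already covers */
    for (; lvl < LEVELS - 1; lvl++) {
        (*mem_refs)++;             /* one memory access per intermediate level */
        t = t->next[idx(va, lvl)];
        pwc_fill(va, lvl + 1, t);  /* cache the partial translation            */
    }
    (*mem_refs)++;                 /* final access: read the leaf PTE          */
    return t->frame[idx(va, LEVELS - 1)] + (va & ((1u << PG_SHIFT) - 1));
}

int main(void) {
    /* Build a minimal page table mapping one example virtual address. */
    table_t *tbl[LEVELS];
    for (int i = 0; i < LEVELS; i++) tbl[i] = calloc(1, sizeof(table_t));

    uint64_t va = 0x7f1234567abcULL;
    for (int i = 0; i < LEVELS - 1; i++) tbl[i]->next[idx(va, i)] = tbl[i + 1];
    tbl[LEVELS - 1]->frame[idx(va, LEVELS - 1)] = 0xABCDE000ULL;

    int cold = 0, warm = 0;
    uint64_t pa = walk(tbl[0], va, &cold); /* PWC empty: four memory accesses  */
    walk(tbl[0], va, &warm);               /* PWC hit on leaf pointer: one access */
    printf("pa=%#llx, memory accesses: cold=%d warm=%d\n",
           (unsigned long long)pa, cold, warm);
    return 0;
}

On the second walk of the same address the cache supplies the pointer to the leaf table, so the walker performs one memory access instead of four; that saved work is the source of the TLB miss latency reduction a PTWC hit represents.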
Year
2013
DOI
10.1007/978-3-642-54420-0_43
Venue
Lecture Notes in Computer Science
Keywords
virtual memory, page walk, cache organization
Field
Physical address, Computer science, Virtual memory, Latency (engineering), Cache, Virtual address space, Page table, Parallel computing, Cache algorithms, Translation lookaside buffer, Distributed computing
DocType
Conference
Volume
8374
ISSN
0302-9743
Citations
1
PageRank
0.34
References
5
Authors
3
Name                    Order  Citations  PageRank
Albert Esteve           1      12         2.17
María Engracia Gómez    2      149        17.48
Antonio Robles          3      481        30.40