Title
Tight competitive ratios for parallel disk prefetching and caching
Abstract
We consider the natural extension of the well-known single-disk caching problem to the parallel disk I/O model (PDM) [17]. The main challenge is to achieve as much parallelism as possible and avoid I/O bottlenecks. We are given a fast memory (cache) of M memory blocks along with a request sequence Σ = (b1, b2, ..., bn), where each block bi resides on one of D disks. In each parallel I/O step, at most one block can be fetched from each disk. The task is to serve Σ in the minimum number of parallel I/Os. Thus, each I/O is analogous to a page fault, with the difference that during each fault up to D blocks can be brought into memory, as long as all of the new blocks entering memory reside on different disks. The problem has a long history [18, 12, 13, 26]. Note that the problem is non-trivial even if all requests in Σ are distinct; this restricted version is called read-once. Despite progress on the offline version [13, 15] and the read-once version [12], the general online problem has remained open. Here, we provide comprehensive results with a full general solution for the problem, with asymptotically tight competitive ratios. To exploit parallelism, any parallel disk algorithm needs a certain amount of lookahead into future requests, and to provide effective caching, an online algorithm must achieve an o(D) competitive ratio. We show a lower bound: with lookahead L ≤ M, any online algorithm must be Ω(D)-competitive. For lookahead L greater than M(1 + 1/ε), where ε is a constant, our algorithm SKEW achieves the tight upper bound of O(√(MD/L)) on the competitive ratio. The previous algorithm tLRU [26] was O((MD/L)^(2/3))-competitive, and this bound was also shown to be tight [26] for an LRU-based strategy. We achieve the tight ratio using a strategy quite different from LRU. We also show tight results for randomized algorithms against an oblivious adversary, and we give an algorithm achieving better bounds in the resource augmentation model.
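To make the cost model described in the abstract concrete, here is a minimal sketch of a PDM cache simulator: it charges one parallel I/O step per miss and allows that step to fetch at most one block from each disk. It is only an illustration of the model under assumed policies (LRU eviction plus a simple greedy lookahead prefetch), not the paper's SKEW or tLRU algorithms, and all function and variable names are hypothetical.

```python
# Minimal PDM cost-model sketch (not the paper's SKEW or tLRU algorithms).
# Policy assumptions for illustration only: LRU eviction, plus a greedy
# prefetch of the nearest future block from each other disk within the
# next L requests. All names here are hypothetical.

from collections import OrderedDict

def serve(requests, disk_of, M, D, L):
    """Return the number of parallel I/O steps needed to serve `requests`
    (the sequence Sigma) with a cache of M blocks over D disks, where each
    parallel I/O step may fetch at most one block from each disk."""
    cache = OrderedDict()          # block -> None, oldest (LRU) first
    steps, i, n = 0, 0, len(requests)
    while i < n:
        b = requests[i]
        if b in cache:             # cache hit: refresh recency, advance
            cache.move_to_end(b)
            i += 1
            continue
        steps += 1                 # miss: one parallel I/O step
        # Greedily pick at most one not-yet-cached future block per other
        # disk from the lookahead window (at most min(D, M) fetches total).
        prefetch = {}
        for f in requests[i + 1 : i + 1 + L]:
            d = disk_of[f]
            if (d != disk_of[b] and d not in prefetch
                    and f not in cache and len(prefetch) + 1 < min(D, M)):
                prefetch[d] = f
        for blk in list(prefetch.values()) + [b]:   # fetch b last (most recent)
            if len(cache) >= M:
                cache.popitem(last=False)           # evict LRU block
            cache[blk] = None
        # b is now resident; the hit is taken on the next loop iteration.
    return steps

if __name__ == "__main__":
    # Toy read-once sequence: 8 distinct blocks striped across D = 2 disks.
    reqs = list(range(8))
    disk = {b: b % 2 for b in reqs}
    print(serve(reqs, disk, M=4, D=2, L=4))   # 4 steps: full parallelism here
```

In this toy run the greedy policy pairs each missed block with one block from the other disk, so the 8 distinct requests cost 4 parallel I/Os; with no lookahead (L = 0) the same sequence would cost 8, illustrating why lookahead is essential for parallelism.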
Year
2008
DOI
10.1145/1378533.1378593
Venue
SPAA
Keywords
algorithm SKEW, parallel disk prefetching, I/O bottleneck, competitive ratio, online algorithm, lookahead L, tight competitive ratio, I/O step, previous algorithm tLRU, parallel disk algorithm, page fault, I/O model, upper bound, randomized algorithm, lower bound, competitive analysis, online algorithms
Field
Online algorithm, Randomized algorithm, Computer science, Cache, Upper and lower bounds, Parallel computing, Skew, Page fault, Distributed computing, Competitive analysis
DocType
Conference
Citations
1
PageRank
0.34
References
22
Authors
4
Name                  Order  Citations  PageRank
Wing-Kai Hon          1      856        78.67
Rahul Shah            2      1059       61.31
Peter J. Varman       3      700        83.23
Jeffrey Scott Vitter  4      66281      246.72