| Abstract |
|---|
| With computing power growing much faster than storage I/O performance, parallel applications increasingly suffer from I/O latency. I/O prefetching is effective in hiding this latency, but existing prefetching techniques are conservative and their effectiveness is limited. Recently, a more aggressive approach named pre-execution prefetching [19] has been proposed. In this paper, we first identify a drawback of this pre-execution prefetching approach and then propose a new method that overcomes it by scheduling the I/O operations between the main thread and the prefetching thread. Through careful I/O scheduling, our approach further extends the overlap between computation and I/O and avoids I/O competition within a single process. The results of extensive experiments, including experiments on real-life applications such as big matrix manipulation and Hill encryption, demonstrate the benefits of the proposed approach. |
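The abstract's core idea, a prefetching thread that runs ahead of the main thread while its distance is bounded so the two threads do not compete for the same disk, can be illustrated with a minimal sketch. This is not the paper's implementation; the class name, block size, and `depth` bound are all hypothetical, and a simple condition variable stands in for the paper's I/O scheduling.

```python
import threading

BLOCK_SIZE = 4096  # illustrative block size, not from the paper

class PrefetchingReader:
    """Hypothetical sketch: a prefetch thread reads blocks ahead of the
    main thread, but stays within `depth` blocks of it so prefetch I/O
    does not compete with the main thread's own progress."""

    def __init__(self, path, depth=4):
        self.path = path
        self.depth = depth            # how far ahead prefetching may run
        self.cache = {}               # block index -> bytes
        self.next_needed = 0          # next block the main thread wants
        self.done = False
        self.lock = threading.Condition()
        self.worker = threading.Thread(target=self._prefetch, daemon=True)
        self.worker.start()

    def _prefetch(self):
        # Prefetch thread: issue reads early, throttled by `depth`.
        idx = 0
        with open(self.path, "rb") as f:
            while True:
                with self.lock:
                    while idx >= self.next_needed + self.depth and not self.done:
                        self.lock.wait()   # too far ahead: yield the disk
                    if self.done:
                        return
                data = f.read(BLOCK_SIZE)  # the actual I/O, outside the lock
                with self.lock:
                    self.cache[idx] = data
                    self.lock.notify_all()
                if not data:               # EOF sentinel stored as b""
                    return
                idx += 1

    def read_block(self, idx):
        # Main thread: consume a prefetched block, waiting if it is not
        # cached yet; advancing next_needed lets the prefetcher run on.
        with self.lock:
            self.next_needed = idx + 1
            self.lock.notify_all()
            while idx not in self.cache:
                self.lock.wait()
            return self.cache.pop(idx)

    def close(self):
        with self.lock:
            self.done = True
            self.lock.notify_all()
```

The `depth` bound is the scheduling knob: with an unbounded prefetcher, the two threads would issue competing requests to the same device, which is exactly the drawback the paper attributes to naive pre-execution prefetching.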
Year | DOI | Venue |
---|---|---|
2013 | 10.1007/978-3-642-38750-0_30 | Lecture Notes in Computer Science |
Field | DocType | Volume
---|---|---
I/O scheduling, Concurrency, Latency (engineering), Computer science, Scheduling (computing), Parallel computing, Matrix manipulation, Encryption, Thread (computing), Computation | Conference | 7905
ISSN | Citations | PageRank
---|---|---
0302-9743 | 3 | 0.44
References | Authors
---|---
15 | 3
Name | Order | Citations | PageRank |
---|---|---|---|
Yue Zhao | 1 | 8 | 2.23 |
Kenji Yoshigoe | 2 | 84 | 13.88 |
Mengjun Xie | 3 | 212 | 23.46 |