Abstract |
---|
Prefetching is an effective technique to mitigate a well-known problem in multicore processors: the gap between computing and data access performance. The goal of data prefetching is to anticipate the CPU's needs by retrieving data from memory and loading it into cache memory before the CPU requests it, reducing the miss rate and the processor's penalty. In NoC-based multiprocessor systems, prefetching efficiency is even more critical to system performance, since memory access time depends on the distance between the requesting processor and the memory storing the data, as well as on the network traffic. This work proposes a temporized data prefetching mechanism that aims to minimize the penalty in NoC-based multiprocessors. The proposed technique uses a proactive process, initiated by the requesting processor, to prefetch data from memory and load it into the local cache. The time at which to prefetch data is predicted from the cache miss history of each processor and from the NoC's traffic information. In experiments with 16 cores, the proposed algorithm reduced the processors' penalty by 6.25% on average, and by up to 29%, compared to an event-based technique. |
Year | DOI | Venue |
---|---|---|
2016 | 10.1109/ASAP.2016.7760805 | 2016 IEEE 27th International Conference on Application-specific Systems, Architectures and Processors (ASAP) |
Keywords | Field | DocType |
---|---|---|
Prefetching, Network-on-Chip, Multiprocessor, Cache Coherence, Time Series | Cache, Computer science, CPU cache, Real-time computing, Multi-core processor, Central processing unit, Access time, Parallel computing, Algorithm, Distributed memory, Multiprocessing, Instruction prefetch, Embedded system | Conference
ISSN | ISBN | Citations |
---|---|---|
2160-0511 | 978-1-5090-1504-7 | 0 |
PageRank | References | Authors |
---|---|---|
0.34 | 4 | 3 |
Name | Order | Citations | PageRank |
---|---|---|---|
Maria Cireno | 1 | 3 | 0.82 |
Andre Aziz | 2 | 0 | 0.68 |
Edna Barros | 3 | 21 | 4.99 |