Abstract |
---|
The massively parallel architecture enables graphics processing units (GPUs) to boost performance for a wide range of applications. Initially, GPUs employed only scratchpad memory as on-chip memory. Recently, to broaden the scope of applications that GPUs can accelerate, GPU vendors have adopted caches as on-chip memory in new generations of GPUs. Unfortunately, GPU caches face many performance... |
Year | DOI | Venue
---|---|---
2018 | 10.1109/TCAD.2017.2764886 | IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems

Keywords | Field | DocType
---|---|---
Instruction sets, Graphics processing units, Pipelines, Computer architecture, System-on-chip, Parallel processing, Registers | System on a chip, Instruction set, Scheduling (computing), Computer science, Cache, Scratchpad memory, Parallel computing, Cache algorithms, Thread (computing), Real-time computing, Speedup | Journal

Volume | Issue | ISSN
---|---|---
37 | 8 | 0278-0070

Citations | PageRank | References
---|---|---
3 | 0.39 | 0
Authors |
---|
5 |
Name | Order | Citations | PageRank |
---|---|---|---
Yun Liang | 1 | 868 | 59.55 |
Xiaolong Xie | 2 | 146 | 9.07 |
Yu Wang | 3 | 2279 | 211.60 |
Guangyu Sun | 4 | 1920 | 111.55 |
Tao Wang | 5 | 44 | 8.54 |