Author Info

Name: ONUR KAYIRAN
Affiliation: The Pennsylvania State University, University Park, PA, USA
Papers: 23
Collaborators: 73
Citations: 356
PageRank: 13.47
Referers: 747
Referees: 1294
References: 655
Publications (23 papers)
| Title | Citations | PageRank | Year |
| --- | --- | --- | --- |
| Analyzing and Leveraging Decoupled L1 Caches in GPUs | 1 | 0.34 | 2021 |
| Analyzing and Leveraging Shared L1 Caches in GPUs | 0 | 0.34 | 2020 |
| Quantifying Data Locality in Dynamic Parallelism in GPUs | 0 | 0.34 | 2019 |
| Analyzing and Leveraging Remote-Core Bandwidth for Enhanced Performance in GPUs | 2 | 0.36 | 2019 |
| Opportunistic computing in GPU architectures | 6 | 0.40 | 2019 |
| Lost in Abstraction: Pitfalls of Analyzing GPUs at the Intermediate Language Level | 4 | 0.40 | 2018 |
| Holistic Management of the GPGPU Memory Hierarchy to Manage Warp-level Latency Tolerance | 1 | 0.34 | 2018 |
| Architectural Support for Efficient Large-Scale Automata Processing | 2 | 0.35 | 2018 |
| CODA: Enabling Co-location of Computation and Data for Multiple GPU Systems | 4 | 0.39 | 2018 |
| Efficient and Fair Multi-programming in GPUs via Effective Bandwidth Management | 6 | 0.39 | 2018 |
| Modular Routing Design for Chiplet-Based Systems | 3 | 0.41 | 2018 |
| Design and Analysis of an APU for Exascale Computing | 11 | 0.56 | 2017 |
| There and Back Again: Optimizing the Interconnect in Networks of Memory Cubes | 0 | 0.34 | 2017 |
| Controlled Kernel Launch for Dynamic Parallelism in GPUs | 10 | 0.50 | 2017 |
| CODA: Enabling Co-location of Computation and Data for Near-Data Processing | 0 | 0.34 | 2017 |
| Scheduling Techniques for GPU Architectures with Processing-In-Memory Capabilities | 52 | 0.88 | 2016 |
| Prefetching Techniques for Near-memory Throughput Processors | 6 | 0.42 | 2016 |
| OSCAR: Orchestrating STT-RAM cache traffic for heterogeneous CPU-GPU architectures | 4 | 0.41 | 2016 |
| μC-States: Fine-grained GPU Datapath Power Management | 13 | 0.45 | 2016 |
| Anatomy of GPU Memory System for Multi-Application Execution | 26 | 0.68 | 2015 |
| Exploiting Inter-Warp Heterogeneity to Improve GPGPU Performance | 28 | 0.58 | 2015 |
| Neither more nor less: optimizing thread-level parallelism for GPGPUs | 90 | 2.35 | 2013 |
| Orchestrated scheduling and prefetching for GPGPUs | 87 | 1.89 | 2013 |