Name: ONUR KAYIRAN
Affiliation: The Pennsylvania State University, University Park, PA, USA
Papers: 23
Collaborators: 73
Citations: 356
PageRank: 13.47
Referers: 747
Referees: 1294
References: 655
Title | Citations | PageRank | Year
Analyzing and Leveraging Decoupled L1 Caches in GPUs | 1 | 0.34 | 2021
Analyzing and Leveraging Shared L1 Caches in GPUs | 0 | 0.34 | 2020
Quantifying Data Locality in Dynamic Parallelism in GPUs | 0 | 0.34 | 2019
Analyzing and Leveraging Remote-Core Bandwidth for Enhanced Performance in GPUs | 2 | 0.36 | 2019
Opportunistic computing in GPU architectures | 6 | 0.40 | 2019
Lost in Abstraction: Pitfalls of Analyzing GPUs at the Intermediate Language Level | 4 | 0.40 | 2018
Holistic Management of the GPGPU Memory Hierarchy to Manage Warp-level Latency Tolerance | 1 | 0.34 | 2018
Architectural Support for Efficient Large-Scale Automata Processing | 2 | 0.35 | 2018
CODA: Enabling Co-location of Computation and Data for Multiple GPU Systems | 4 | 0.39 | 2018
Efficient and Fair Multi-programming in GPUs via Effective Bandwidth Management | 6 | 0.39 | 2018
Modular Routing Design for Chiplet-Based Systems | 3 | 0.41 | 2018
Design and Analysis of an APU for Exascale Computing | 11 | 0.56 | 2017
There and Back Again: Optimizing the Interconnect in Networks of Memory Cubes | 0 | 0.34 | 2017
Controlled Kernel Launch for Dynamic Parallelism in GPUs | 10 | 0.50 | 2017
CODA: Enabling Co-location of Computation and Data for Near-Data Processing | 0 | 0.34 | 2017
Scheduling Techniques for GPU Architectures with Processing-In-Memory Capabilities | 52 | 0.88 | 2016
Prefetching Techniques for Near-memory Throughput Processors | 6 | 0.42 | 2016
OSCAR: Orchestrating STT-RAM cache traffic for heterogeneous CPU-GPU architectures | 4 | 0.41 | 2016
μC-States: Fine-grained GPU Datapath Power Management | 13 | 0.45 | 2016
Anatomy of GPU Memory System for Multi-Application Execution | 26 | 0.68 | 2015
Exploiting Inter-Warp Heterogeneity to Improve GPGPU Performance | 28 | 0.58 | 2015
Neither more nor less: optimizing thread-level parallelism for GPGPUs | 90 | 2.35 | 2013
Orchestrated scheduling and prefetching for GPGPUs | 87 | 1.89 | 2013