Effect of Hyper-Threading in Latency-Critical Multithreaded Cloud Applications and Utilization Analysis of the Major System Resources | 0 | 0.34 | 2022 |
VMT: Virtualized Multi-Threading for Accelerating Graph Workloads on Commodity Processors | 0 | 0.34 | 2022 |
Bandwidth-Aware Dynamic Prefetch Configuration for IBM POWER8 | 1 | 0.35 | 2020 |
Phase-Aware Cache Partitioning to Target Both Turnaround Time and System Performance | 1 | 0.35 | 2020 |
Thread Isolation to Improve Symbiotic Scheduling on SMT Multicore Processors | 0 | 0.34 | 2020 |
An efficient cache flat storage organization for multithreaded workloads for low power processors | 0 | 0.34 | 2020 |
An Aging-Aware GPU Register File Design Based on Data Redundancy | 0 | 0.34 | 2019 |
Efficient Management of Cache Accesses to Boost GPGPU Memory Subsystem Performance | 0 | 0.34 | 2019 |
FOS: a low-power cache organization for multicores | 0 | 0.34 | 2019 |
Modeling and analysis of the performance of exascale photonic networks | 0 | 0.34 | 2019 |
Way Combination for an Adaptive and Scalable Coherence Directory | 0 | 0.34 | 2019 |
Accurately modeling the on-chip and off-chip GPU memory subsystem | 4 | 0.44 | 2018 |
Improving System Turnaround Time with Intel CAT by Identifying LLC Critical Applications | 0 | 0.34 | 2018 |
A Workload Generator for Evaluating SMT Real-Time Systems | 0 | 0.34 | 2018 |
Improving GPU Cache Hierarchy Performance with a Fetch and Replacement Cache | 0 | 0.34 | 2018 |
Workload Characterization for Exascale Computing Networks | 0 | 0.34 | 2018 |
Designing lab sessions focusing on real processors for computer architecture courses: A practical perspective | 0 | 0.34 | 2018 |
Modeling a Photonic Network for Exascale Computing | 1 | 0.40 | 2017 |
Application Clustering Policies to Address System Fairness with Intel’s Cache Allocation Technology | 2 | 0.36 | 2017 |
A Hardware Approach to Fairly Balance the Inter-Thread Interference in Shared Caches | 0 | 0.34 | 2017 |
Perf&Fair: A Progress-Aware Scheduler to Enhance Performance and Fairness in SMT Multicores | 3 | 0.39 | 2017 |
Exploiting Data Compression to Mitigate Aging in GPU Register Files | 0 | 0.34 | 2017 |
Improving IBM POWER8 Performance Through Symbiotic Job Scheduling | 0 | 0.34 | 2017 |
On Microarchitectural Mechanisms for Cache Wearout Reduction | 4 | 0.46 | 2017 |
A research-oriented course on Advanced Multicore Architecture: Contents and active learning methodologies | 1 | 0.41 | 2017 |
Student Research Poster: A Low Complexity Cache Sharing Mechanism to Address System Fairness | 0 | 0.34 | 2016 |
A dynamic execution time estimation model to save energy in heterogeneous multicores running periodic tasks | 4 | 0.40 | 2016 |
Impact of Memory-Level Parallelism on the Performance of GPU Coherence Protocols | 0 | 0.34 | 2016 |
Enhancing the L1 Data Cache Design to Mitigate HCI | 2 | 0.37 | 2016 |
Bandwidth-Aware On-Line Scheduling in SMT Multicores | 3 | 0.41 | 2016 |
Current Challenges in Simulations of HPC Systems | 0 | 0.34 | 2015 |
A reuse-based refresh policy for energy-aware eDRAM caches | 1 | 0.34 | 2015 |
Addressing Fairness in SMT Multicores with a Progress-Aware Scheduler | 7 | 0.43 | 2015 |
Accurately modeling the GPU memory subsystem | 2 | 0.43 | 2015 |
A Research-Oriented Course on Advanced Multicore Architecture | 0 | 0.34 | 2015 |
Addressing bandwidth contention in SMT multicores through scheduling | 1 | 0.35 | 2014 |
Combining RAM technologies for hard-error recovery in L1 data caches working at very-low power modes | 5 | 0.43 | 2013 |
L1-bandwidth aware thread allocation in multicore SMT processors | 11 | 0.68 | 2013 |
Exploiting reuse information to reduce refresh energy in on-chip eDRAM caches | 2 | 0.36 | 2013 |
Power-aware scheduling with effective task migration for real-time multicore embedded systems | 10 | 0.53 | 2013 |
Using Huge Pages and Performance Counters to Determine the LLC Architecture | 0 | 0.34 | 2013 |
Impact on performance and energy of the retention time and processor frequency in L1 macrocell-based data caches | 0 | 0.34 | 2012 |
A cost-effective heuristic to schedule local and remote memory in cluster computers | 3 | 0.40 | 2012 |
Understanding Cache Hierarchy Contention in CMPs to Improve Job Scheduling | 4 | 0.41 | 2012 |
Combining recency of information with selective random and a victim cache in last-level caches | 4 | 0.55 | 2012 |
OMHI 2012: First International Workshop on On-Chip Memory Hierarchies and Interconnects: Organization, Management and Implementation | 0 | 0.34 | 2012 |
Page-Based Memory Allocation Policies of Local and Remote Memory in Cluster Computers | 0 | 0.34 | 2012 |
A New Energy-Aware Dynamic Task Set Partitioning Algorithm for Soft and Hard Embedded Real-Time Systems | 8 | 0.71 | 2011 |
A cluster computer performance predictor for memory scheduling | 0 | 0.34 | 2011 |
Improving Last-Level Cache Performance by Exploiting the Concept of MRU-Tour | 0 | 0.34 | 2011 |