Title
Memory access patterns: the missing piece of the multi-GPU puzzle
Abstract
With the increased popularity of multi-GPU nodes in modern HPC clusters, it is imperative to develop matching programming paradigms for their efficient utilization. In order to take advantage of the local GPUs and the low-latency, high-throughput interconnects that link them, programmers need to meticulously adapt parallel applications with respect to load balancing, boundary conditions and device synchronization. This paper presents MAPS-Multi, an automatic multi-GPU partitioning framework that distributes the workload based on the underlying memory access patterns. The framework consists of host- and device-level APIs that allow programs to run efficiently on a variety of GPU and multi-GPU architectures. The framework implements several layers of code optimization, device abstraction, and automatic inference of inter-GPU memory exchanges. The paper demonstrates that MAPS-Multi achieves near-linear scaling on fundamental computational operations, as well as on real-world applications in deep learning and multivariate analysis.
Year
2015
DOI
10.1145/2807591.2807611
Venue
International Conference for High Performance Computing, Networking, Storage, and Analysis
Keywords
Multi-GPU Programming, Memory Access Patterns
Field
Program optimization, Uniform memory access, Programming paradigm, Computer science, Load balancing (computing), Instruction set, Parallel computing, Memory management, Memory map, CUDA Pinned memory, Distributed computing
DocType
Conference
ISBN
978-1-5090-0273-3
Citations
14
PageRank
0.61
References
15
Authors
4
Name          Order  Citations  PageRank
Tal Ben-Nun   1      1161       4.21
E. Levy       2      22         2.24
Amnon Barak   3      5901       19.00
Eri Rubin     4      20         1.18