Title
Classifying and alleviating the communication overheads in matrix computations on large-scale NUMA multiprocessors
Abstract
Large-scale, shared-memory multiprocessors have non-uniform memory access (NUMA) costs, and communication cost dominates the execution time of matrix computations. Memory contention and remote memory access are the two major communication overheads on large-scale NUMA multiprocessors. However, previous experiments and discussions have focused either on reducing the number of remote memory accesses or on alleviating memory contention. In this paper, we propose a simple but effective processor allocation policy, called rectangular processor allocation, that alleviates both overheads at the same time. The policy divides the matrix elements into a number of rectangular blocks and assigns each processor to compute the results of one rectangular block. This approach can eliminate many unnecessary memory accesses to the memory modules. After running many matrix computations under a realistic memory-system simulator, we confirmed that at least one-fourth of the communication overhead may be reduced. We therefore conclude that the rectangular processor allocation policy performs better than other popular policies, and that combining it with a software-interleaved data allocation policy is a better choice for alleviating communication overhead. (C) 1998 Elsevier Science Inc. All rights reserved.
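A minimal sketch in C of the kind of partitioning the abstract describes: an n x n result matrix is split into a pr x pc grid of rectangular blocks, and each processor is assigned exactly one block to compute. The grid shape, matrix size, and helper names below are illustrative assumptions, not taken from the paper.

/*
 * Sketch (assumed details, not the paper's code): map each processor in a
 * pr x pc grid to one rectangular block of an n x n matrix.
 */
#include <stdio.h>

/* Half-open index range [begin, end) owned by one block along a dimension. */
struct range { int begin, end; };

/* Rows or columns owned by block `coord` out of `nblocks` blocks along a
 * dimension of length `n`; any remainder is spread over the first blocks. */
static struct range block_range(int n, int nblocks, int coord)
{
    int base = n / nblocks, extra = n % nblocks;
    struct range r;
    r.begin = coord * base + (coord < extra ? coord : extra);
    r.end   = r.begin + base + (coord < extra ? 1 : 0);
    return r;
}

int main(void)
{
    const int n = 8;          /* matrix is n x n (assumed size)        */
    const int pr = 2, pc = 3; /* processors arranged as a pr x pc grid */

    for (int pi = 0; pi < pr; pi++) {
        for (int pj = 0; pj < pc; pj++) {
            struct range rows = block_range(n, pr, pi);
            struct range cols = block_range(n, pc, pj);
            printf("processor (%d,%d): rows [%d,%d) x cols [%d,%d)\n",
                   pi, pj, rows.begin, rows.end, cols.begin, cols.end);
        }
    }
    return 0;
}

The intent, following the abstract, is that each processor touches only the rows and columns of its own rectangular block, so fewer memory modules are referenced per processor and both remote accesses and contention on any single module are reduced.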
Year
1998
DOI
10.1016/S0164-1212(98)10040-7
Venue
Journal of Systems and Software
Keywords
large-scale NUMA multiprocessors, communication overhead, matrix computation, non-uniform memory access
Field
Registered memory, Interleaved memory, Uniform memory access, Computer science, Parallel computing, Cache-only memory architecture, Distributed memory, Real-time computing, Memory management, Non-uniform memory access, Memory map, Distributed computing
DocType
Journal
Volume
44
Issue
1
ISSN
0164-1212
Citations
0
PageRank
0.34
References
9
Authors
3
Name              Order  Citations  PageRank
Yi-Min Wang       1      0          0.34
Hsiao-Hsi Wang    2      68         13.06
Ruei-Chuan Chang  3      267        56.19