Title
Massively Parallel Computation Via Remote Memory Access
Abstract
We introduce the Adaptive Massively Parallel Computation (AMPC) model, an extension of the Massively Parallel Computation (MPC) model. At a high level, the AMPC model strengthens the MPC model by storing all messages sent within a round in a distributed data store. In the following round, all machines are provided with random read access to the data store, subject to the same constraints on the total amount of communication as in the MPC model. Our model is inspired by previous empirical studies of distributed graph algorithms [8, 30] using MapReduce and a distributed hash table service [17]. This extension allows us to give new graph algorithms with much lower round complexities than the best-known solutions in the MPC model. In particular, in the AMPC model we show how to solve maximal independent set in O(1) rounds, and connectivity/minimum spanning tree in O(log log_{m/n} n) rounds, both using O(n^delta) space per machine for a constant delta < 1. In the same memory regime for MPC, the best-known algorithms for these problems require poly(log n) rounds. Our results imply that the 2-Cycle conjecture, which is widely believed to hold in the MPC model, does not hold in the AMPC model.
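The abstract's central mechanism can be illustrated with a minimal sketch: machines first write messages to a shared store, and in the next round each machine issues reads against it where each read may depend on the answer to the previous one (the "adaptive" access that MPC forbids within a round). The classes and function names below (`SharedStore`, `ampc_round_write`, `ampc_round_adaptive_read`, the pointer-chasing example, and the `space_limit` cap standing in for the O(n^delta) local-space bound) are illustrative assumptions, not the paper's actual API.

```python
class SharedStore:
    """Stands in for the distributed key-value store (DHT) of the AMPC model.
    (Illustrative assumption: a plain dict replaces the distributed service.)"""
    def __init__(self):
        self._data = {}

    def write(self, key, value):
        self._data[key] = value

    def read(self, key):
        return self._data.get(key)


def ampc_round_write(store, parent):
    # Round 1: every machine publishes the pointers (edges) it holds.
    for v, p in parent.items():
        store.write(v, p)


def ampc_round_adaptive_read(store, vertices, space_limit):
    # Round 2: each machine chases pointers adaptively. Each read depends
    # on the previous answer, which an MPC machine cannot do within one
    # round; the number of reads is capped by the machine's local space.
    roots = {}
    for v in vertices:
        cur, reads = v, 0
        while reads < space_limit:
            p = store.read(cur)
            if p is None or p == cur:
                break  # reached a self-loop, i.e., the root
            cur = p
            reads += 1
        roots[v] = cur
    return roots
```

For example, with parent pointers forming a path 3 -> 2 -> 1 -> 0 (and 0 pointing to itself), a single adaptive-read round lets every machine reach the root 0, whereas a non-adaptive round could only advance each pointer by one hop.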
Year
2021
DOI
10.1145/3470631
Venue
ACM TRANSACTIONS ON PARALLEL COMPUTING
Keywords
Datasets, neural networks, gaze detection, text tagging
DocType
Journal
Volume
8
Issue
3
ISSN
2329-4949
Citations
0
PageRank
0.34
References
0
Authors
6
Name | Order | Citations | PageRank
Soheil Behnezhad | 1 | 24 | 11.02
Laxman Dhulipala | 2 | 78 | 6.51
Hossein Esfandiari | 3 | 88 | 15.38
Jakub Lacki | 4 | 75 | 12.67
Vahab S. Mirrokni | 5 | 4309 | 287.14
Warren Schudy | 6 | 1 | 1.37