Title
On Parallelizing Streaming Algorithms.
Abstract
We study the complexity of parallelizing streaming algorithms (or equivalently, branching programs). If M(f) denotes the minimum average memory required to compute a function f(x_1, x_2, ..., x_n), how much memory is required to compute f on k independent streams that arrive in parallel? We show that when the inputs (updates) are sampled independently from some domain X and M(f) = Ω(n), then computing the value of f on k streams requires average memory at least Ω(k · M(f)/n). Our results are obtained by defining new ways to measure the information complexity of streaming algorithms. We define two such measures: the transitional and cumulative information content. We prove that any streaming algorithm with transitional information content I can be simulated using average memory O(n(I + 1)). On the other hand, if every algorithm with cumulative information content I can be simulated with average memory O(I + 1), then computing f on k streams requires average memory at least Ω(k · (M(f) − 1)).
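For readability, here is a restatement of the abstract's bounds as display equations; the symbol Mem_k(f), denoting the average memory needed to compute f on k parallel streams, is introduced only for illustration and does not appear in the record itself.
% Bounds as stated in the abstract; I denotes the transitional or
% cumulative information content, as indicated.
\[
  \text{(i)}\quad \mathrm{Mem}_k(f) \;\geq\; \Omega\!\left(\frac{k \cdot M(f)}{n}\right)
  \qquad \text{when } M(f) = \Omega(n),
\]
\[
  \text{(ii)}\quad \text{transitional information content } I
  \;\Longrightarrow\; \text{a simulation with } O\bigl(n(I+1)\bigr) \text{ average memory},
\]
\[
  \text{(iii)}\quad \text{if every algorithm with cumulative information content } I
  \text{ admits an } O(I+1) \text{ simulation, then }
  \mathrm{Mem}_k(f) \;\geq\; \Omega\bigl(k \cdot (M(f)-1)\bigr).
\]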
Year
2015
Venue
Electronic Colloquium on Computational Complexity (ECCC)
Field
Discrete mathematics, Combinatorics, Streaming algorithm, Average memory, Information complexity, Mathematics, Branching program
DocType
Journal
Volume
22
Citations
0
PageRank
0.34
References
0
Authors
2
Name, Order, Citations, PageRank
Anup Rao, 1, 581, 32.80
Makrand Sinha, 2, 1, 2.04