Abstract
---
Parallel batched data structures are designed to process synchronized batches of operations in a parallel computing model. In this paper, we propose parallel combining, a technique that implements a concurrent data structure from a parallel batched one. The idea is that we explicitly synchronize concurrent operations into batches: one of the processes becomes a combiner which collects concurrent requests and initiates a parallel batched algorithm involving the owners (clients) of the collected requests. Intuitively, the cost of synchronizing the concurrent calls can be compensated by running the parallel batched algorithm. We validate the intuition via two applications of parallel combining. First, we use our technique to design a concurrent data structure optimized for read-dominated workloads, taking a dynamic graph data structure as an example. Second, we use a novel parallel batched priority queue to build a concurrent one. In both cases, we obtain performance gains with respect to the state-of-the-art algorithms.
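The combining scheme described in the abstract can be sketched as follows. This is a minimal, sequential stand-in, not the paper's algorithm: class and method names (`CombiningPriorityQueue`, `_submit`, `_combine`) are hypothetical, the batch is executed by the combiner alone with a binary heap, whereas the paper runs a parallel batched algorithm that also involves the waiting clients.

```python
import heapq
import threading

class CombiningPriorityQueue:
    """Sketch of parallel combining over a priority queue.

    Threads publish requests into a shared list; whichever thread wins
    the combiner lock drains all pending requests and executes them as
    one batch, then signals the owning clients.
    """

    def __init__(self):
        self._heap = []                       # underlying batched structure
        self._pending = []                    # published, not-yet-served requests
        self._pending_lock = threading.Lock()
        self._combiner_lock = threading.Lock()

    def _submit(self, op, arg=None):
        # Publish the request, then either become the combiner or wait
        # until some combiner has served the request.
        req = {'op': op, 'arg': arg, 'done': threading.Event(), 'res': None}
        with self._pending_lock:
            self._pending.append(req)
        while not req['done'].is_set():
            if self._combiner_lock.acquire(blocking=False):
                try:
                    self._combine()
                finally:
                    self._combiner_lock.release()
            else:
                req['done'].wait(timeout=0.001)
        return req['res']

    def _combine(self):
        # Collect the current batch of concurrent requests.
        with self._pending_lock:
            batch, self._pending = self._pending, []
        # Sequential stand-in for the parallel batched algorithm.
        for req in batch:
            if req['op'] == 'extract_min':
                req['res'] = heapq.heappop(self._heap) if self._heap else None
            else:
                heapq.heappush(self._heap, req['arg'])
            req['done'].set()             # wake the request's owner (client)

    def insert(self, key):
        self._submit('insert', key)

    def extract_min(self):
        return self._submit('extract_min')
```

A usage sketch: several threads call `insert` concurrently; one of them ends up serving the whole batch, after which `extract_min` returns keys in order.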
Year | DOI | Venue
---|---|---
2018 | 10.4230/LIPIcs.OPODIS.2018.11 | OPODIS
Field | DocType | Citations
---|---|---
Data structure, Synchronization, Computer science, Synchronizing, Intuition, Priority queue, Concurrent data structure, Graph (abstract data type), Distributed computing | Conference | 0

PageRank | References | Authors
---|---|---
0.34 | 0 | 3
Name | Order | Citations | PageRank
---|---|---|---
Vitaly Aksenov | 1 | 0 | 0.68
Petr Kuznetsov | 2 | 253 | 30.43
Anatoly Shalyto | 3 | 98 | 20.06