Abstract |
---|
The ability to efficiently perform probabilistic inference tasks is critical to large-scale applications in statistics and artificial intelligence. Dramatic speedups can be achieved by appropriately mapping current inference algorithms to a parallel framework. Parallel exact inference methods still suffer from exponential complexity in the worst case, whereas approximate inference methods have been parallelized with good speedup. In this paper, we focus on a variant of the Belief Propagation algorithm that has better convergence properties and is provably convergent under certain conditions. We show that this method is amenable to coarse-grained parallelization and propose techniques to parallelize it optimally without sacrificing convergence. Experiments on a shared-memory system demonstrate near-ideal speedup with reasonable scalability. |
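The coarse-grained parallelization described in the abstract can be illustrated with a generic synchronous (Jacobi-style) sum-product schedule, in which every directed-edge message of one iteration is an independent task reading only the previous iteration's messages. This is a minimal sketch on a hypothetical 3-node binary chain with made-up potentials, not the paper's specific convergent variant:

```python
# Sketch of coarse-grained parallel belief propagation on a small pairwise
# model. Assumption: a synchronous sum-product schedule, NOT the paper's
# specific provably convergent BP variant; the model and potentials below
# are invented for illustration.
from concurrent.futures import ThreadPoolExecutor

# A 3-node binary chain x0 - x1 - x2 with symmetric attractive pairwise
# potentials and uniform unary potentials (all values assumed).
nodes = [0, 1, 2]
edges = [(0, 1), (1, 2)]
pairwise = {e: [[2.0, 1.0], [1.0, 2.0]] for e in edges}  # psi(x_i, x_j)
unary = {v: [1.0, 1.0] for v in nodes}

directed = edges + [(j, i) for (i, j) in edges]
messages = {d: [0.5, 0.5] for d in directed}  # m_{i->j}(x_j), normalized

def neighbors(v):
    return [j for (i, j) in directed if i == v]

def new_message(i, j, msgs):
    """Sum-product message m_{i->j}(x_j), reading the previous-iteration msgs."""
    if (i, j) in pairwise:
        psi = pairwise[(i, j)]
    else:  # transpose for the reverse direction of an undirected edge
        psi = [[pairwise[(j, i)][b][a] for b in range(2)] for a in range(2)]
    out = []
    for xj in range(2):
        s = 0.0
        for xi in range(2):
            prod = unary[i][xi]
            for k in neighbors(i):
                if k != j:
                    prod *= msgs[(k, i)][xi]
            s += prod * psi[xi][xj]
        out.append(s)
    z = sum(out)
    return [o / z for o in out]  # normalize for numerical stability

for _ in range(20):
    snapshot = {d: list(m) for d, m in messages.items()}
    # Coarse-grained parallel step: each directed-edge message is an
    # independent task because all reads go to the frozen snapshot.
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda d: (d, new_message(d[0], d[1], snapshot)),
                           directed)
    messages = dict(results)

def belief(v):
    b = [unary[v][x] for x in range(2)]
    for k in neighbors(v):
        for x in range(2):
            b[x] *= messages[(k, v)][x]
    z = sum(b)
    return [x / z for x in b]

print([round(p, 3) for p in belief(1)])
```

Because all messages within an iteration depend only on the previous iteration, the edge set can be partitioned across threads or cores with no intra-iteration synchronization, which is the essence of the coarse-grained approach; the paper's contribution is doing this for a convergent BP variant without sacrificing its convergence guarantees.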
Year | Venue | Keywords |
---|---|---|
2011 | Canadian Conference on AI | dramatic speedup, parallel framework, parallel exact inference method, convergent approximate inference method, convergent property, probabilistic inference task, approximate inference method, provably convergent, near-ideal speedup, good speedup, current inference algorithm, graphical model, parallel algorithm |
Field | DocType | Volume |
---|---|---|
Shared memory, Inference, Computer science, Parallel algorithm, Approximate inference, Artificial intelligence, Graphical model, Machine learning, Scalability, Speedup, Belief propagation | Conference | 6657 |
ISSN | Citations | PageRank |
---|---|---|
0302-9743 | 1 | 0.36 |
References | Authors |
---|---|
14 | 2 |
Name | Order | Citations | PageRank |
---|---|---|---|
Ming Su | 1 | 6 | 0.93 |
Elizabeth A. Thompson | 2 | 20 | 5.47 |