Title
Processing Markov Logic Networks with GPUs: Accelerating Network Grounding
Abstract
Markov Logic is an expressive and widely used knowledge representation formalism that combines logic and probabilities, providing a powerful framework for inference and learning tasks. Most Markov Logic implementations perform inference by transforming the logic representation into a set of weighted propositional formulae that encode a Markov network, the ground Markov network. Probabilistic inference is then performed over the grounded network. Constructing, simplifying, and evaluating the network are the main steps of the inference phase. As the size of a Markov network can grow rather quickly, Markov Logic Network (MLN) inference can become very expensive, motivating a rich vein of research on the optimization of MLN performance. We claim that parallelism can play a large role in this task. Namely, we demonstrate that widely available Graphics Processing Units (GPUs) can be used to improve the performance of a state-of-the-art MLN system, Tuffy, with minimal changes. Indeed, comparing the performance of our GPU-based system, TuGPU, to that of the Alchemy, Tuffy and RockIt systems on three widely used applications shows that TuGPU is up to 15x faster than the other systems.
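As a conceptual illustration of the grounding step described in the abstract, the following minimal Python sketch enumerates the ground instances of a single weighted first-order clause over a tiny, hypothetical domain, producing the weighted propositional clauses that form part of a ground Markov network. The clause, constants, and helper names are illustrative assumptions; this is a naive sequential sketch, not the GPU-based grounding implemented in TuGPU.

```python
from itertools import product

# Hypothetical weighted MLN clause (illustrative only):
#   w = 1.1 : Friends(x, y) ^ Smokes(x) => Smokes(y)
WEIGHT = 1.1
DOMAIN = ["Anna", "Bob", "Chris"]  # tiny, made-up set of constants


def ground_clause(domain, weight):
    """Enumerate every grounding of the clause above as a weighted
    propositional clause (the implication rewritten as a disjunction:
    !Friends(x, y) v !Smokes(x) v Smokes(y))."""
    groundings = []
    for x, y in product(domain, repeat=2):
        clause = (
            ("Friends", (x, y), False),  # negative literal
            ("Smokes", (x,), False),     # negative literal
            ("Smokes", (y,), True),      # positive literal
        )
        groundings.append((weight, clause))
    return groundings


if __name__ == "__main__":
    for w, clause in ground_clause(DOMAIN, WEIGHT):
        text = " v ".join(
            ("" if positive else "!") + f"{pred}({', '.join(args)})"
            for pred, args, positive in clause
        )
        print(f"{w:.1f}  {text}")
```

Even this single clause over three constants yields nine ground clauses; with realistic domains the ground network grows rapidly, which is the cost the paper targets with GPU parallelism.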
Year
2015
DOI
10.1007/978-3-319-40566-7_9
Venue
INDUCTIVE LOGIC PROGRAMMING, ILP 2015
Keywords
Statistical relational learning, Markov logic, Markov logic networks, Datalog, Parallel computing, GPUs
Field
ENCODE, Knowledge representation and reasoning, Statistical relational learning, Inference, Computer science, Markov chain, Theoretical computer science, Implementation, Artificial intelligence, Formalism (philosophy), Datalog, Machine learning
DocType
Conference
Volume
9575
ISSN
0302-9743
Citations
0
PageRank
0.34
References
25
Authors
4
Name | Order | Citations | PageRank
Carlos Alberto Martinez-Angeles | 1 | 3 | 1.04
Inês Dutra | 2 | 61 | 10.35
Vítor Santos Costa | 3 | 880 | 74.70
Jorge Buenabad-Chávez | 4 | 36 | 7.10