Title
M-Walk: Learning to Walk over Graphs using Monte Carlo Tree Search
Abstract
Learning to walk over a graph towards a target node, given a source node and a query, is an important problem in applications such as knowledge base completion (KBC). It can be formulated as a reinforcement learning (RL) problem with a known state transition model. To overcome the challenge of sparse rewards, we develop a graph-walking agent called M-Walk, which consists of a deep recurrent neural network (RNN) and Monte Carlo Tree Search (MCTS). The RNN encodes the state (i.e., the history of the walked path) and maps it separately to a policy and Q-values. To effectively train the agent from sparse rewards, we combine MCTS with the neural policy to generate trajectories that yield more positive rewards. From these trajectories, the network is improved in an off-policy manner using Q-learning, which modifies the RNN policy via parameter sharing. Our proposed RL algorithm repeatedly applies this policy-improvement step to learn the model. At test time, MCTS is combined with the neural policy to predict the target node. Experimental results on several graph-walking benchmarks show that M-Walk learns better policies than other RL-based methods, which are mainly based on policy gradients. M-Walk also outperforms traditional KBC baselines.
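The abstract's key architectural idea is a recurrent state encoder whose policy and Q-value outputs share parameters, so that Q-learning updates also reshape the policy. Below is a minimal sketch of that idea, not the authors' code: the class name, the choice of a GRU encoder, and all dimensions are illustrative assumptions.

```python
# Minimal sketch (assumed names and dimensions) of an RNN state encoder
# with policy and Q-value heads, following the description in the abstract.
import torch
import torch.nn as nn


class MWalkNet(nn.Module):
    def __init__(self, node_emb_dim: int, hidden_dim: int, max_actions: int):
        super().__init__()
        # The GRU encodes the state: the history of the walked path.
        self.encoder = nn.GRU(node_emb_dim, hidden_dim, batch_first=True)
        # Both heads read the same GRU encoding; Q-learning gradients flow
        # through the shared encoder, which modifies the policy as well.
        self.policy_head = nn.Linear(hidden_dim, max_actions)
        self.q_head = nn.Linear(hidden_dim, max_actions)

    def forward(self, path_embeddings: torch.Tensor):
        # path_embeddings: (batch, path_len, node_emb_dim),
        # one embedding per node visited so far.
        _, h_n = self.encoder(path_embeddings)
        state = h_n.squeeze(0)                       # (batch, hidden_dim)
        policy = torch.softmax(self.policy_head(state), dim=-1)
        q_values = self.q_head(state)                # one Q(s, a) per candidate edge
        return policy, q_values
```

Per the abstract, training would run MCTS with this policy to collect trajectories with more positive rewards and fit the Q-values off-policy from them; at test time, MCTS and the neural policy are again combined to predict the target node.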
Year: 2018
Venue: Advances in Neural Information Processing Systems 31 (NIPS 2018)
Keywords: reinforcement learning, experimental results, Monte Carlo tree search
Field: Graph, Monte Carlo tree search, Computer science, Recurrent neural network, Artificial intelligence, Knowledge base, Machine learning, Reinforcement learning
DocType: Conference
Volume: 31
ISSN: 1049-5258
Citations: 5
PageRank: 0.39
References: 0
Authors: 5
Name           Order   Citations   PageRank
Yelong Shen    1       709         35.97
Jianshu Chen   2       883         52.94
Po-Sen Huang   3       926         44.01
Yuqing Guo     4       8           1.12
Jianfeng Gao   5       57292       96.43