Title
Learn a Prior for RHEA for Better Online Planning.
Abstract
Rolling Horizon Evolutionary Algorithms (RHEA) are a class of online planning methods for real-time game playing; their performance is closely tied to the planning horizon and the search time allowed. In this paper, we propose to learn a prior for RHEA in an offline manner by training a value network and a policy network. The value network shortens the planning horizon by providing an estimate of future rewards, and the policy network initializes the population, which narrows the search scope. The proposed algorithm, named prior-based RHEA (p-RHEA), trains the policy and value networks by performing planning and learning iteratively. In the planning stage, a horizon-limited search assisted by the policy and value networks is performed to improve the policy and collect training samples. In the learning stage, the policy and value networks are trained on the collected samples to learn better prior knowledge. Experimental results on OpenAI Gym MuJoCo tasks show that the proposed p-RHEA significantly outperforms standard RHEA.
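The abstract describes the p-RHEA planning stage: the policy network seeds the population of candidate action sequences, and the value network estimates the return beyond the truncated horizon. Below is a minimal sketch of that idea in Python; the toy step_fn dynamics, the placeholder policy_net and value_net, the elitist mutation scheme, and all hyperparameters are illustrative assumptions, not the paper's implementation or settings.

```python
import numpy as np

ACTION_DIM, HORIZON, POP_SIZE, GENERATIONS, SIGMA = 1, 10, 20, 5, 0.1

def step_fn(state, action):
    """Toy stand-in for the simulator: the state drifts by the action, reward favors staying near 0."""
    next_state = state + action
    reward = -float(np.abs(next_state).sum())
    return next_state, reward

def policy_net(state):
    """Placeholder prior policy: proposes a mean action for the current state."""
    return -0.1 * state

def value_net(state):
    """Placeholder estimate of the return beyond the planning horizon."""
    return -float(np.abs(state).sum())

def rollout_return(state, plan):
    """Simulate one candidate action sequence; bootstrap the tail with the value net."""
    total = 0.0
    for action in plan:
        state, reward = step_fn(state, action)
        total += reward
    return total + value_net(state)     # value net stands in for the truncated future

def plan_action(state):
    # Initialize the population around the policy-network prior (narrower search scope).
    base = np.stack([policy_net(state) for _ in range(HORIZON)])
    population = [base + SIGMA * np.random.randn(*base.shape) for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        scores = [rollout_return(state, plan) for plan in population]
        elite = population[int(np.argmax(scores))]
        # Simple elitist evolution: mutate the best sequence to form the next generation.
        population = [elite] + [elite + SIGMA * np.random.randn(*elite.shape)
                                for _ in range(POP_SIZE - 1)]
    return population[0][0]             # execute only the first action (rolling horizon)

state = np.zeros(ACTION_DIM)
print("planned action:", plan_action(state))
```

In the full method, the two networks would be trained on samples collected by this search in the learning stage; here they are fixed placeholders.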
Year
2019
Venue
arXiv: Artificial Intelligence
DocType
Journal
Volume
abs/1902.05284
Citations
0
PageRank
0.34
References
16
Authors
3
Name           Order   Citations   PageRank
Xin Tong       1       2119        127.72
Weiming Liu    2       0           2.03
Bin Li         3       682         7.40