Title
Pretraining Deep Actor-Critic Reinforcement Learning Algorithms With Expert Demonstrations.
Abstract
Pretraining with expert demonstrations has been found useful for speeding up the training of deep reinforcement learning algorithms, since it reduces the amount of online simulation data required. Some approaches use supervised learning to accelerate feature learning, while others pretrain policies by imitating expert demonstrations. However, these methods are unstable and ill-suited to actor-critic reinforcement learning algorithms. Moreover, some existing methods rely on the assumption that the demonstrations are globally optimal, which does not hold in most scenarios. In this paper, we incorporate expert demonstrations into an actor-critic reinforcement learning framework while ensuring that performance is not degraded by the fact that the expert demonstrations are not globally optimal. We theoretically derive a method for computing policy gradients and value estimators from expert demonstrations alone. Our method is theoretically sound for actor-critic reinforcement learning algorithms, pretraining both the policy and the value functions. We apply our method to two typical actor-critic reinforcement learning algorithms, DDPG and ACER, and demonstrate experimentally that it not only outperforms the corresponding RL algorithms without pretraining, but is also more simulation efficient.
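The core idea the abstract describes, pretraining both the policy and the value function from expert demonstrations before any online interaction, can be sketched as follows. This is a minimal, hypothetical illustration (linear models, synthetic demonstrations, behavior-cloning and return-regression losses), not the paper's actual derivation; all names and the data-generating setup are assumptions.

```python
import numpy as np

# Hypothetical sketch (assumed setup, not the paper's exact method):
# pretrain both the actor and the critic of an actor-critic agent
# from expert demonstrations before any online simulation.

rng = np.random.default_rng(0)

# Synthetic expert demonstrations: states, expert actions, and observed
# returns-to-go. The expert weights and return model are illustrative.
states = rng.normal(size=(256, 4))
expert_w = np.array([[0.5], [-0.3], [0.1], [0.8]])
expert_actions = states @ expert_w                           # assumed expert policy
returns = states @ np.array([[1.0], [0.5], [-0.2], [0.3]])   # assumed returns

W_actor = np.zeros((4, 1))   # linear policy parameters
W_critic = np.zeros((4, 1))  # linear value-function parameters
lr = 0.05

for _ in range(500):
    # Behavior cloning: squared-error regression of pi(s) onto expert actions.
    err_a = states @ W_actor - expert_actions
    W_actor -= lr * states.T @ err_a / len(states)
    # Value pretraining: regress V(s) onto the demonstration returns.
    err_v = states @ W_critic - returns
    W_critic -= lr * states.T @ err_v / len(states)

bc_loss = float((err_a ** 2).mean())
value_loss = float((err_v ** 2).mean())
print(bc_loss, value_loss)
```

After this offline phase, both losses are near zero and the pretrained actor and critic would be handed to an online actor-critic algorithm (e.g. DDPG or ACER) as initialization, which is what makes the overall procedure simulation efficient.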
Year
2018
Venue
arXiv: Artificial Intelligence
Field
Computer science, Global optimum, Algorithm, Supervised learning, Artificial intelligence, Feature learning, Machine learning, Speedup, Reinforcement learning, Estimator
DocType
Journal
Volume
abs/1801.10459
Citations
3
PageRank
0.38
References
13
Authors
2
Name          Order  Citations  PageRank
Xiaoqin Zhang  1      38         9.23
Huimin Ma      2      197        29.49