Title: Hessian Aided Policy Gradient

Abstract:
Reducing the variance of policy-gradient estimators has long been a focus of reinforcement learning research. While classic algorithms like REINFORCE find an $\epsilon$-approximate first-order stationary point in $O(1/\epsilon^4)$ random trajectory simulations, no provable improvement on this complexity has been made so far. This paper presents a Hessian-aided policy gradient method with the first improved sample complexity of $O(1/\epsilon^3)$. Although our method exploits information from the policy Hessian, it can be implemented in time linear in the parameter dimension and is hence applicable to sophisticated DNN parameterizations. Simulations on standard tasks validate the efficiency of our method.
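For context, a minimal sketch (hypothetical code, not the paper's implementation) of the baseline the abstract refers to: the one-sample REINFORCE estimator of the policy gradient, $\nabla_\theta \mathbb{E}[r] = \mathbb{E}[r\,\nabla_\theta \log \pi_\theta(a)]$, shown here for a softmax policy on a 3-armed bandit. The reward values, learning rate, and step count are illustrative choices.

```python
import numpy as np

def softmax(theta):
    # Numerically stable softmax policy over the arms.
    z = np.exp(theta - theta.max())
    return z / z.sum()

def reinforce_grad(theta, rewards, rng):
    """One-sample REINFORCE estimate of grad_theta E[r]
    = E[r * grad_theta log pi_theta(a)]."""
    pi = softmax(theta)
    a = rng.choice(len(theta), p=pi)
    # Gradient of log-softmax at the sampled arm: e_a - pi.
    grad_log = -pi
    grad_log[a] += 1.0
    return rewards[a] * grad_log

rng = np.random.default_rng(0)
rewards = np.array([1.0, 0.0, 0.0])  # only arm 0 pays off
theta = np.zeros(3)
# Plain stochastic gradient ascent with the REINFORCE estimator;
# this is the O(1/eps^4) baseline the paper improves upon.
for _ in range(2000):
    theta += 0.1 * reinforce_grad(theta, rewards, rng)
```

After training, the policy's probability mass concentrates on the rewarding arm; the high variance of this one-sample estimator is precisely what the paper's Hessian-aided correction targets.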
Year: 2019
Venue: International Conference on Machine Learning
Field: Pattern recognition, Computer science, Hessian matrix, Artificial intelligence, Machine learning
DocType: Conference
Citations: 2
PageRank: 0.35
References: 0
Authors: 5
Name                  Order  Citations  PageRank
Zebang Shen           1      17         9.36
Alejandro Ribeiro     2      2817       221.08
Seyed Hamed Hassani   3      151        22.04
Hui Qian              4      59         13.26
Chao Mi               5      8          1.87