Title
Hindsight Value Function for Variance Reduction in Stochastic Dynamic Environment.
Abstract
Policy gradient methods are appealing in deep reinforcement learning but suffer from the high variance of gradient estimates. To reduce this variance, the state value function is commonly used as a baseline. However, its effect is limited in stochastic dynamic environments, where unexpected state dynamics and rewards increase the variance. In this paper, we propose to replace the state value function with a novel hindsight value function, which leverages information from the future to reduce the variance of the gradient estimate in stochastic dynamic environments. In particular, to obtain an ideally unbiased gradient estimate, we propose an information-theoretic approach that optimizes the embeddings of the future to be independent of previous actions. In our experiments, we apply the proposed hindsight value function in stochastic dynamic environments, including both discrete-action and continuous-action environments. Compared with the standard state value function, the proposed hindsight value function consistently reduces the variance, stabilizes training, and improves the eventual policy.
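To make the abstract's idea concrete, below is a minimal sketch (not the authors' released code) of one policy-gradient update in which the usual state value baseline V(s_t) is replaced by a hindsight value function conditioned on an embedding of future observations. All module names (the GRU future encoder, hindsight_value), shapes, and the synthetic rollout are illustrative assumptions; the paper's information-theoretic penalty that keeps the future embedding independent of earlier actions, which is what preserves unbiasedness, is only indicated by a comment.

```python
# Hedged sketch of a policy-gradient step with a hindsight value baseline.
# Assumptions: discrete actions, a GRU over future observations as the
# "embedding of the future", and dummy data in place of a real rollout.
import torch
import torch.nn as nn

obs_dim, act_dim, embed_dim, T = 8, 4, 16, 20

policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim))
future_encoder = nn.GRU(obs_dim, embed_dim, batch_first=True)  # hypothetical future encoder
hindsight_value = nn.Sequential(
    nn.Linear(obs_dim + embed_dim, 64), nn.Tanh(), nn.Linear(64, 1)
)

# Synthetic rollout for illustration only.
obs = torch.randn(T, obs_dim)
acts = torch.randint(0, act_dim, (T,))
returns = torch.randn(T)  # discounted returns-to-go

# Embed the future of each step t: run the suffix obs[t+1:] through the GRU.
futures = []
for t in range(T):
    suffix = obs[t + 1:] if t + 1 < T else torch.zeros(1, obs_dim)
    _, h = future_encoder(suffix.unsqueeze(0))  # h: (1, 1, embed_dim)
    futures.append(h.squeeze(0).squeeze(0))
futures = torch.stack(futures)  # (T, embed_dim)

# Hindsight baseline: conditions on the state AND the future embedding,
# so it can explain away reward/transition noise that V(s_t) cannot.
baseline = hindsight_value(torch.cat([obs, futures], dim=-1)).squeeze(-1)

logp = torch.distributions.Categorical(logits=policy(obs)).log_prob(acts)
advantage = returns - baseline.detach()

policy_loss = -(logp * advantage).mean()
value_loss = (returns - baseline).pow(2).mean()
# The paper additionally trains the future embedding to be independent of
# previous actions (an information-theoretic constraint), which keeps the
# gradient estimate ideally unbiased; that term is omitted in this sketch.
loss = policy_loss + value_loss
loss.backward()
```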
Year
2021
DOI
10.24963/ijcai.2021/341
Venue
IJCAI
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
9
Name           Order  Citations  PageRank
Jiaming Guo    1      1          2.38
Rui Zhang      2      0          1.01
Xishan Zhang   3      7          2.90
Shaohui Peng   4      0          0.34
Qi Yi          5      0          0.34
Zidong Du      6      574        29.68
Xing Hu        7      0          1.69
Qi Guo         8      716        34.09
Yunji Chen     9      1432       79.99