Title
Importance Sampling Actor-Critic Algorithms
Abstract
Importance Sampling (IS) and actor-critic are two methods that have been used to reduce the variance of gradient estimates in policy gradient optimization. We show how IS can be combined with Temporal Difference (TD) methods to estimate a cost function parameter for one policy using the entire history of system interactions, incorporating many different policies. The resulting algorithm is then applied to improving gradient estimates in a policy gradient optimization. Empirical results demonstrate a 20-40x reduction in variance over the IS estimator on an example queueing problem, yielding a similar factor of improvement in the convergence of a gradient search.
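The core idea the abstract describes, evaluating one policy's cost from data generated under many other policies via importance-weighted TD updates, can be sketched as below. This is a minimal illustrative sketch, assuming a tabular MDP with known behavior-policy action probabilities; the function name, parameters, and per-decision weighting scheme are assumptions for illustration, not the authors' actual algorithm.

```python
# Sketch: off-policy TD(0) evaluation with per-decision importance
# sampling. Hypothetical names and a toy tabular setting; not the
# paper's exact method.
import numpy as np

def is_td0_evaluate(history, pi_target, pi_behavior, n_states,
                    alpha=0.1, gamma=0.95):
    """Estimate the value function of `pi_target` from transitions
    generated under (possibly many different) behavior policies.

    history: list of transitions (s, a, r, s_next)
    pi_target, pi_behavior: arrays of action probabilities with shape
        (n_states, n_actions); pi_behavior holds the probabilities of
        the policy that actually generated each transition (assumed
        known, as in IS methods).
    """
    V = np.zeros(n_states)
    for (s, a, r, s_next) in history:
        # Importance weight corrects for the mismatch between the
        # target policy and the data-generating policy.
        rho = pi_target[s, a] / pi_behavior[s, a]
        td_error = r + gamma * V[s_next] - V[s]
        # Weighted TD update: reuses off-policy data while remaining
        # an estimate for the target policy.
        V[s] += alpha * rho * td_error
    return V
```

Because every logged transition contributes regardless of which policy produced it, the whole interaction history can be reused when the target policy changes during a gradient search, which is the variance-reduction mechanism the abstract refers to.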
Year
2006
DOI
10.1109/ACC.2006.1656451
Venue
2006 American Control Conference, Vols 1-12
Keywords
computational modeling, temporal difference, history, estimation, stochastic processes, estimation theory, parameter estimation, monte carlo methods, importance sampling, approximation algorithms, function approximation, cost function
DocType
Conference
Volume
1-12
ISSN
0743-1619
Citations
2
PageRank
0.45
References
3
Authors
3
Name                 Order  Citations  PageRank
Jason L. Williams    1      217        15.34
John W. Fisher III   2      8787       4.44
Alan S. Willsky      3      74668      47.01