Title
An Analysis of Experience Replay in Temporal Difference Learning
Abstract
Temporal difference (TD) methods are used by reinforcement learning algorithms to predict future rewards. This article analyzes theoretically, and illustrates experimentally, the effects of performing TD(lambda) prediction updates backwards over a number of past experiences. More precisely, two related techniques described in the literature are examined, referred to as replayed TD and backwards TD. The former is essentially an online learning method that performs a regular TD(0) update at each time step and then replays updates backwards for a number of previous states. The latter operates in offline mode, updating the predictions for all visited states backwards after the end of a trial. Both are shown to be approximately equivalent to TD(lambda) with variable lambda values selected in a particular way, even though they perform only TD(0) updates. The experimental results show that replayed TD(0) is competitive with TD(lambda) in terms of learning speed and prediction quality.
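The abstract only sketches how replayed TD(0) interleaves a regular update with backward replay. The following Python sketch illustrates the idea on a tabular prediction task; the environment (a simple 1-D random walk), the replay depth REPLAY_DEPTH, the step size, and all function names are illustrative assumptions, not details taken from the paper.

```python
import random

# Illustrative sketch of "replayed TD(0)" as described in the abstract:
# at each time step, do one regular TD(0) update for the current transition,
# then replay TD(0) updates backwards over a few earlier transitions.
# All concrete settings below are assumptions for the example.

N_STATES = 7            # states 0..6; states 0 and 6 are terminal
GAMMA = 1.0             # undiscounted episodic task
ALPHA = 0.1             # constant step-size parameter
REPLAY_DEPTH = 3        # how many earlier transitions to replay after each step


def td0_update(V, s, r, s_next, terminal):
    """One ordinary TD(0) update for the transition s -> s_next."""
    target = r + (0.0 if terminal else GAMMA * V[s_next])
    V[s] += ALPHA * (target - V[s])


def replayed_td0_episode(V):
    """Run one episode of replayed TD(0) on a 1-D random walk."""
    s = N_STATES // 2                   # start in the middle state
    history = []                        # recent transitions (s, r, s_next, terminal)
    while True:
        s_next = s + random.choice((-1, 1))
        terminal = s_next in (0, N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0

        # Regular TD(0) update for the current transition ...
        td0_update(V, s, r, s_next, terminal)
        history.append((s, r, s_next, terminal))

        # ... then replay TD(0) updates backwards over the most recent
        # transitions, so the freshly updated successor values propagate
        # further back along the trajectory.
        for past in reversed(history[-(REPLAY_DEPTH + 1):-1]):
            td0_update(V, *past)

        if terminal:
            return
        s = s_next


if __name__ == "__main__":
    V = [0.0] * N_STATES
    for _ in range(500):
        replayed_td0_episode(V)
    print([round(v, 2) for v in V[1:-1]])   # learned predictions for nonterminal states
```

Replaying the buffered transitions in reverse order is what gives the TD(lambda)-like effect described in the abstract: each replayed state is updated toward a successor value that has just been refreshed, so new reward information propagates several steps back per time step even though every individual update is plain TD(0).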
Year
1999
DOI
10.1080/019697299125127
Venue
CYBERNETICS AND SYSTEMS
Keywords
reinforcement learning,temporal difference,temporal difference learning
Field
Online learning,Temporal difference learning,Computer science,Artificial intelligence,Machine learning,Reinforcement learning,Lambda
DocType
Journal
Volume
30
Issue
5
ISSN
0196-9722
Citations
7
PageRank
0.67
References
8
Authors
1
Name
Pawel Cichosz
Order
Citations
PageRank
1176.16