Title
Prediction in Intelligence: An Empirical Comparison of Off-policy Algorithms on Robots
Abstract
The ability to continually make predictions about the world may be central to intelligence. Off-policy learning and general value functions (GVFs) are well-established algorithmic techniques for learning about many signals while interacting with the world. In the past couple of years, many ambitious works have used off-policy GVF learning to improve control performance in both simulation and robotic control tasks. Many of these works use semi-gradient temporal-difference (TD) learning algorithms, like Q-learning, which are potentially divergent. In the last decade, several TD learning algorithms have been proposed that are convergent and computationally efficient, but not much is known about how they perform in practice, especially on robots. In this work, we perform an empirical comparison of modern off-policy GVF learning algorithms on three different robot platforms, providing insights into their strengths and weaknesses. We also discuss the challenges of conducting fair comparative studies of off-policy learning on robots and develop a new evaluation methodology that is successful and applicable to a relatively complicated robot domain.
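The abstract contrasts potentially divergent semi-gradient TD methods with convergent gradient-TD alternatives for off-policy GVF learning. As a minimal sketch of the kind of update rules being compared (not the paper's implementation), the Python below shows a single-GVF update for off-policy semi-gradient TD(0) and for TDC/GTD(0) with linear function approximation; the class and variable names (cumulant, gamma_next, rho for the importance-sampling ratio) and the step sizes are illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of two off-policy GVF update rules
# with linear features: semi-gradient TD(0), which can diverge off-policy,
# and TDC/GTD(0), a gradient-TD method with convergence guarantees.
import numpy as np

class OffPolicyTD0:
    """Semi-gradient off-policy TD(0): w += alpha * rho * delta * x."""
    def __init__(self, n_features, alpha=0.1):
        self.w = np.zeros(n_features)
        self.alpha = alpha

    def update(self, x, cumulant, gamma_next, x_next, rho):
        delta = cumulant + gamma_next * self.w @ x_next - self.w @ x
        self.w += self.alpha * rho * delta * x
        return delta

class TDC:
    """TDC/GTD(0): a secondary weight vector v corrects the semi-gradient update."""
    def __init__(self, n_features, alpha=0.1, beta=0.01):
        self.w = np.zeros(n_features)   # primary (prediction) weights
        self.v = np.zeros(n_features)   # secondary weights estimating E[delta | x]
        self.alpha, self.beta = alpha, beta

    def update(self, x, cumulant, gamma_next, x_next, rho):
        delta = cumulant + gamma_next * self.w @ x_next - self.w @ x
        self.w += self.alpha * rho * (delta * x - gamma_next * (self.v @ x) * x_next)
        self.v += self.beta * rho * (delta - self.v @ x) * x
        return delta
```

In a GVF, the cumulant replaces the reward and the continuation gamma_next may be state-dependent; rho is the ratio of target-policy to behavior-policy probability for the action taken. A usage step would look like `TDC(n_features=8).update(x=phi, cumulant=c, gamma_next=0.9, x_next=phi_next, rho=pi_a / b_a)` with hypothetical feature vectors and signals.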
Year
2019
DOI
10.5555/3306127.3331711
Venue
Autonomous Agents and Multiagent Systems (AAMAS)
Keywords
artificial intelligence, robotics, reinforcement learning, off-policy learning, temporal-difference learning, general value functions
Field
Empirical comparison, Temporal difference learning, Computer science, Robotic control, Algorithm, Artificial intelligence, Robot, Strengths and weaknesses, Machine learning, Robotics, Reinforcement learning
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
4
Name                 Order   Citations   PageRank
Banafsheh Rafiee     1       0           0.34
Sina Ghiassian       2       4           2.49
Adam White           3       138         18.56
Richard S. Sutton    4       61001       436.83