Title
Gaussian Processes for Data-Efficient Learning in Robotics and Control
Abstract
Autonomous learning has been a promising direction in control and robotics for more than a decade, since data-driven learning allows one to reduce the amount of engineering knowledge that is otherwise required. However, autonomous reinforcement learning (RL) approaches typically require many interactions with the system to learn controllers, which is a practical limitation in real systems such as robots, where collecting many interactions can be impractical and time-consuming. To address this problem, current learning approaches typically require task-specific knowledge in the form of expert demonstrations, realistic simulators, pre-shaped policies, or specific knowledge about the underlying dynamics. In this paper, we follow a different approach and speed up learning by extracting more information from data. In particular, we learn a probabilistic, non-parametric Gaussian process transition model of the system. By explicitly incorporating model uncertainty into long-term planning and controller learning, our approach reduces the effects of model errors, a key problem in model-based learning. Compared to state-of-the-art RL, our model-based policy search method achieves an unprecedented speed of learning. We demonstrate its applicability to autonomous learning in real robot and control tasks.
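To make the core idea concrete, the following is a minimal, hypothetical sketch (not the authors' PILCO implementation) of learning a probabilistic GP transition model from state-action data and querying its predictive uncertainty. It uses scikit-learn's GaussianProcessRegressor on an invented 1-D toy system; the paper goes further by propagating this uncertainty through long-term predictions during policy search.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel

rng = np.random.default_rng(0)

# Hypothetical toy dynamics (illustration only): delta_x = 0.1*sin(x) + 0.05*u + noise
X = rng.uniform(-3.0, 3.0, size=(50, 2))            # columns: state x_t, action u_t
dX = 0.1 * np.sin(X[:, 0]) + 0.05 * X[:, 1] + 0.01 * rng.standard_normal(50)

# GP prior over the unknown transition function; kernel hyperparameters are
# optimised by maximising the marginal likelihood inside fit().
kernel = ConstantKernel(1.0) * RBF(length_scale=[1.0, 1.0]) + WhiteKernel(noise_level=1e-4)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(X, dX)

# Predictive mean and standard deviation of the state change for one
# state-action query; the std quantifies model uncertainty where data is sparse.
query = np.array([[0.5, -1.0]])
mean, std = gp.predict(query, return_std=True)
print(f"predicted delta-x: {mean[0]:.3f} +/- {std[0]:.3f}")

The predictive standard deviation is the quantity the paper's approach exploits: regions where the model is uncertain contribute less confident long-term predictions, which reduces the impact of model errors during controller learning.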
Year
2015
DOI
10.1109/TPAMI.2013.218
Venue
IEEE Transactions on Pattern Analysis and Machine Intelligence
Keywords
bayesian inference, gaussian processes, policy search, control, reinforcement learning, robotics, predictive models, computational modeling, data models, probabilistic logic, robots, uncertainty
DocType
Journal
Volume
37
Issue
2
ISSN
0162-8828
Citations
101
PageRank
3.62
References
34
Authors
3
Name                     Order   Citations   PageRank
Marc Peter Deisenroth    1       1095        64.71
Dieter Fox               2       12306       1289.74
Carl Edward Rasmussen    3       2628        309.77