Title
Model-Based Reinforcement Learning For Robot Control
Abstract
Model-free deep reinforcement learning (MFRL) algorithms have achieved many impressive results, but they generally suffer from high sample complexity, which poses a critical challenge for their application to real-world robots. Dynamic models are essential for robot control laws, yet accurate analytical dynamic models are often hard to obtain, so a data-driven approach to learning models becomes important for improving the data efficiency of reinforcement learning. Model-based algorithms reduce sample complexity by learning the system dynamic model. However, in certain environments learning an accurate system dynamic model has proven to be a formidable problem, and the asymptotic performance of model-based algorithms does not reach the level of model-free ones. In our work, we use an ensemble of deep neural networks to learn the system dynamics and incorporate model uncertainty. Then, to attain the high asymptotic performance of advanced model-free methods, the deep deterministic policy gradient (DDPG) algorithm is adopted to optimize the robot control policy. The approach is implemented within ROS to control a Baxter robot in a simulation environment.
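The abstract gives enough detail to sketch the model-learning component it describes. Below is a minimal, illustrative sketch (not the paper's published code) of an ensemble of feed-forward dynamics networks whose disagreement serves as the model uncertainty; the layer sizes, ensemble size, learning rate, and the 14-dimensional state / 7-dimensional action shapes for a Baxter arm task are all assumptions.

```python
# Hedged sketch of an ensemble dynamics model with disagreement-based
# uncertainty, following the abstract's description. All hyperparameters
# and dimensions here are illustrative assumptions, not the paper's values.
import torch
import torch.nn as nn

class DynamicsModel(nn.Module):
    """One ensemble member: predicts the change in state given (state, action)."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state, action):
        delta = self.net(torch.cat([state, action], dim=-1))
        return state + delta  # predict next state as current state + learned delta

class DynamicsEnsemble:
    """Ensemble of independently initialized models; disagreement ~ uncertainty."""
    def __init__(self, state_dim, action_dim, n_models=5, lr=1e-3):
        self.models = [DynamicsModel(state_dim, action_dim) for _ in range(n_models)]
        self.optims = [torch.optim.Adam(m.parameters(), lr=lr) for m in self.models]

    def train_step(self, state, action, next_state):
        # Fit each member to one-step transitions; training each member on
        # the same batch is a simplification (bootstrapped resampling per
        # member is a common alternative).
        for model, optim in zip(self.models, self.optims):
            loss = nn.functional.mse_loss(model(state, action), next_state)
            optim.zero_grad()
            loss.backward()
            optim.step()

    def predict(self, state, action):
        # Mean prediction, plus per-dimension std across members as uncertainty.
        preds = torch.stack([m(state, action) for m in self.models])
        return preds.mean(dim=0), preds.std(dim=0)

# Usage with assumed dimensions for a 7-DoF Baxter arm (joint angles + velocities):
ensemble = DynamicsEnsemble(state_dim=14, action_dim=7)
s, a, s_next = torch.randn(32, 14), torch.randn(32, 7), torch.randn(32, 14)
ensemble.train_step(s, a, s_next)
mean_next, uncertainty = ensemble.predict(s, a)
```

Such an ensemble could then supply imagined rollouts or uncertainty-aware transitions to a DDPG learner, matching the abstract's combination of model-based learning with a model-free policy optimizer; the exact integration is not specified in this record.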
Year
2020
DOI
10.1109/ICARM49381.2020.9195341
Venue
2020 5th International Conference on Advanced Robotics and Mechatronics (ICARM)
Keywords
data-driven approach, learning models, model-based algorithms, system dynamic model, model-free algorithms, deep neural networks, system dynamics, model uncertainty, model-free methods, deep deterministic policy gradient algorithm, robot control policy, Baxter robot, model-based reinforcement, model-free deep reinforcement learning algorithms, real-world robots, robot control laws, analytical dynamic models, MFRL, DDPG, ROS
DocType
Conference
ISBN
978-1-7281-6480-9
Citations
0
PageRank
0.34
References
9
Authors
3
Name            Order   Citations   PageRank
Xiang Li        1       4496        3.52
Weiwei Shang    2       771         3.89
Shuang Cong     3       1293        3.36