Title
Avoiding moving obstacles with stochastic hybrid dynamics using PEARL: PrEference Appraisal Reinforcement Learning
Abstract
Manually deriving optimal robot motions for task completion is difficult, especially when a robot must balance its actions between opposing preferences. One proposed solution is to learn near-optimal motions automatically with Reinforcement Learning (RL), which has been successful for several tasks including swing-free UAV flight, table tennis, and autonomous driving. However, high-dimensional problems remain a challenge. We address this dimensionality constraint with PrEference Appraisal Reinforcement Learning (PEARL), which solves tasks with opposing preferences for acceleration-controlled robots. PEARL projects the high-dimensional continuous robot state space onto a low-dimensional preference feature space, resulting in efficient and adaptable planning. On a dynamic obstacle avoidance task, we demonstrate that an agent trained once on a much simpler problem performs real-time decision-making on significantly larger, high-dimensional problems with unbounded continuous states and actions. The agent is trained with 4 static obstacles, yet avoids up to 900 moving obstacles with complex hybrid stochastic dynamics in a highly constrained space, using only limited information about the environment. We compare these results to traditional, often manually tuned solutions for these high-dimensional problems.
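The abstract's central idea, projecting the full robot-and-obstacle state onto a small set of task preferences and acting greedily on a learned appraisal of those preferences, can be illustrated with a minimal sketch. The specific features (goal attraction, nearest-obstacle proximity, speed), the weights, and the double-integrator dynamics below are illustrative assumptions, not the exact formulation from the paper; in PEARL the appraisal would be learned with RL on a small training problem.

```python
import numpy as np

def preference_features(pos, vel, goal, obstacles):
    """Project a high-dimensional state (robot plus any number of obstacles)
    onto a fixed-size feature vector; hypothetical features for illustration."""
    dist_goal = np.linalg.norm(goal - pos)
    # Nearest-obstacle distance summarizes arbitrarily many obstacles in one number.
    dist_obs = min(np.linalg.norm(pos - o) for o in obstacles) if obstacles else np.inf
    speed = np.linalg.norm(vel)
    return np.array([
        -dist_goal,                 # prefer being close to the goal
        -1.0 / (dist_obs + 1e-3),   # prefer staying away from the nearest obstacle
        -speed,                     # prefer gentle, low-speed motion
    ])

# Placeholder weights for a linear appraisal V(s) = w . features(s);
# these would be learned, not hand-set, in the actual method.
weights = np.array([1.0, 0.5, 0.1])

def greedy_acceleration(pos, vel, goal, obstacles, dt=0.1, a_max=1.0, n=5):
    """Pick the acceleration (from a coarse grid) that maximizes the appraised
    value of the predicted next state under double-integrator dynamics."""
    best_a, best_v = None, -np.inf
    for ax in np.linspace(-a_max, a_max, n):
        for ay in np.linspace(-a_max, a_max, n):
            a = np.array([ax, ay])
            nxt_vel = vel + a * dt
            nxt_pos = pos + vel * dt + 0.5 * a * dt**2
            v = weights @ preference_features(nxt_pos, nxt_vel, goal, obstacles)
            if v > best_v:
                best_v, best_a = v, a
    return best_a

if __name__ == "__main__":
    pos, vel = np.zeros(2), np.zeros(2)
    goal = np.array([5.0, 5.0])
    obstacles = [np.array([2.0, 2.0]), np.array([3.0, 4.0])]
    print(greedy_acceleration(pos, vel, goal, obstacles))
```

Because the feature vector has a fixed length regardless of how many obstacles are present, a policy trained on a few static obstacles can, in principle, be evaluated unchanged against hundreds of moving ones, which is the scaling behavior the abstract reports.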
Year
2016
DOI
10.1109/ICRA.2016.7487169
Venue
2016 IEEE International Conference on Robotics and Automation (ICRA)
Keywords
complex hybrid stochastic obstacle dynamics, static obstacle, dynamic obstacle avoidance robotic task, high-dimensional continuous robot state space, acceleration control, dimensionality constraint, optimal robot motion, preference appraisal reinforcement learning, PEARL, stochastic hybrid dynamics
Field
Obstacle avoidance, Robot learning, Obstacle, Feature vector, Curse of dimensionality, Control engineering, Artificial intelligence, Engineering, Robot, State space, Reinforcement learning
DocType
Conference
Volume
2016
Issue
1
ISSN
1050-4729
Citations
4
PageRank
0.49
References
11
Authors
4
Name, Order, Citations, PageRank
Aleksandra Faust, 1, 68, 14.83
Hao-Tien Chiang, 2, 23, 1.95
Nathanael Rackley, 3, 4, 0.49
Lydia Tapia, 4, 194, 24.66