Title
Learning a State Representation and Navigation in Cluttered and Dynamic Environments
Abstract
In this work, we present a learning-based pipeline for local navigation with a quadrupedal robot in cluttered environments with static and dynamic obstacles. Given high-level navigation commands, the robot is able to safely locomote to a target location based on frames from a depth camera, without any explicit mapping of the environment. First, the sequence of images and the current trajectory of the camera are fused to form a model of the world using state representation learning. The output of this lightweight module is then fed directly into a target-reaching and obstacle-avoiding policy trained with reinforcement learning. We show that decoupling the pipeline into these components yields a sample-efficient policy-learning stage that can be fully trained in simulation in just a dozen minutes. The key component is the state representation, which is trained not only to estimate the hidden state of the world in an unsupervised fashion, but also to help bridge the reality gap, enabling successful sim-to-real transfer. In our experiments with the quadrupedal robot ANYmal in simulation and in the real world, we show that our system handles noisy depth images, avoids dynamic obstacles unseen during training, and exhibits local spatial awareness.
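The decoupled architecture described in the abstract can be illustrated with a minimal sketch: an unsupervised state-representation module that fuses depth frames with the camera trajectory into a latent world state, followed by a small policy network that maps that latent state and a target command to navigation actions. The module names, network sizes, and input dimensions below are assumptions for illustration only, not the authors' implementation.

```python
# A minimal sketch (assumed names and dimensions, not the authors' code) of the
# decoupled pipeline: a state-representation module fuses depth frames with the
# camera trajectory into a latent world state, and a separate policy network
# maps that latent state and the navigation command to actions.
import torch
import torch.nn as nn


class StateRepresentation(nn.Module):
    """Fuses a sequence of depth frames and camera-trajectory features into a latent state."""

    def __init__(self, latent_dim: int = 64, traj_dim: int = 12):
        super().__init__()
        # Lightweight CNN encoder for single-channel depth frames.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(latent_dim), nn.ReLU(),
        )
        # Recurrent fusion of per-frame features with trajectory features over time.
        self.gru = nn.GRU(latent_dim + traj_dim, latent_dim, batch_first=True)

    def forward(self, depth_seq, traj_seq, hidden=None):
        # depth_seq: (B, T, 1, H, W), traj_seq: (B, T, traj_dim)
        b, t = depth_seq.shape[:2]
        feats = self.encoder(depth_seq.flatten(0, 1)).view(b, t, -1)
        latent, hidden = self.gru(torch.cat([feats, traj_seq], dim=-1), hidden)
        # Return the latent state at the last time step plus the recurrent state.
        return latent[:, -1], hidden


class NavigationPolicy(nn.Module):
    """Maps the latent state and a target command to bounded velocity commands."""

    def __init__(self, latent_dim: int = 64, cmd_dim: int = 3, act_dim: int = 3):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + cmd_dim, 128), nn.ReLU(),
            nn.Linear(128, act_dim), nn.Tanh(),
        )

    def forward(self, latent, command):
        return self.mlp(torch.cat([latent, command], dim=-1))


if __name__ == "__main__":
    # Example rollout step with random inputs (batch of 2, sequence of 8 frames).
    srl, policy = StateRepresentation(), NavigationPolicy()
    depth = torch.rand(2, 8, 1, 64, 64)
    traj = torch.rand(2, 8, 12)
    target = torch.rand(2, 3)
    latent, _ = srl(depth, traj)
    action = policy(latent, target)  # (2, 3) velocity command
```

Because the two modules are separate, the state-representation network can be trained on its own (unsupervised) and then frozen, which is what makes the downstream reinforcement-learning stage sample-efficient.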
Year
2021
DOI
10.1109/LRA.2021.3068639
Venue
IEEE ROBOTICS AND AUTOMATION LETTERS
Keywords
Collision avoidance, representation learning, vision-based navigation
DocType
Journal
Volume
6
Issue
3
ISSN
2377-3766
Citations
0
PageRank
0.34
References
0
Authors
4
Name                Order  Citations  PageRank
David Hoeller       1      0          0.34
Lorenz Wellhausen   2      31         4.73
Farbod Farshidian   3      1          4.41
Marco Hutter        4      59         8.36