Title
Learning Occupancy Priors Of Human Motion From Semantic Maps Of Urban Environments
Abstract
Understanding and anticipating human activity is an important capability for intelligent systems in mobile robotics, autonomous driving, and video surveillance. While learning from demonstrations with on-site collected trajectory data is a powerful approach to discover recurrent motion patterns, generalization to new environments, where sufficient motion data are not readily available, remains a challenge. In many cases, however, semantic information about the environment is a highly informative cue for the prediction of pedestrian motion or the estimation of collision risks. In this work, we infer occupancy priors of human motion using only semantic environment information as input. To this end, we apply and discuss a traditional Inverse Optimal Control approach, and propose a novel approach based on Convolutional Neural Networks (CNN) to predict future occupancy maps. Our CNN method produces flexible context-aware occupancy estimations for semantically uniform map regions and generalizes well even with small amounts of training data. Evaluated on synthetic and real-world data, it shows superior results compared to several baselines, marking a qualitative step-up in semantic environment assessment.
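To make the abstract's idea concrete, the following is a minimal, hypothetical sketch of a fully convolutional network that maps a one-hot semantic map of an environment to a per-cell occupancy prior. It is not the authors' architecture; the class count, layer sizes, and the softmax normalization over all grid cells are illustrative assumptions only.

    # Hypothetical illustration (PyTorch), not the paper's actual model:
    # a small CNN that turns a one-hot semantic map into an occupancy prior.
    import torch
    import torch.nn as nn

    class OccupancyPriorCNN(nn.Module):
        def __init__(self, num_semantic_classes: int = 8):  # class count is an assumption
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(num_semantic_classes, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=3, padding=1),
                nn.ReLU(),
            )
            # 1x1 convolution yields one unnormalized score per grid cell
            self.head = nn.Conv2d(64, 1, kernel_size=1)

        def forward(self, semantic_map: torch.Tensor) -> torch.Tensor:
            # semantic_map: (B, C, H, W) one-hot encoding of semantic classes
            scores = self.head(self.encoder(semantic_map))  # (B, 1, H, W)
            b, _, h, w = scores.shape
            # Normalize scores into a distribution over all cells so the output
            # can be read as an occupancy prior for the whole map
            prior = torch.softmax(scores.view(b, -1), dim=1).view(b, 1, h, w)
            return prior

    if __name__ == "__main__":
        model = OccupancyPriorCNN(num_semantic_classes=8)
        dummy_map = torch.zeros(1, 8, 64, 64)
        dummy_map[:, 0] = 1.0  # toy input: every cell labeled with the first class
        print(model(dummy_map).sum().item())  # occupancy prior sums to 1.0 over the grid

In such a setup, training would compare the predicted prior against occupancy maps aggregated from observed trajectories; how the paper actually supervises and structures its network is described in the full text, not here.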
Year
2021
DOI
10.1109/LRA.2021.3062010
Venue
IEEE ROBOTICS AND AUTOMATION LETTERS
Keywords
Deep learning for visual perception, human detection and tracking, human motion analysis, human motion prediction, semantic scene understanding
DocType
Journal
Volume
6
Issue
2
ISSN
2377-3766
Citations
0
PageRank
0.34
References
0
Authors
5
Name | Order | Citations | PageRank
Andrey Rudenko | 1 | 24 | 2.27
Luigi Palmieri | 2 | 65 | 5.38
Johannes Doellinger | 3 | 1 | 1.04
Achim J. Lilienthal | 4 | 1468 | 113.18
Kai O. Arras | 5 | 998 | 81.80