Title
Rethinking Self-driving: Multi-task Knowledge for Better Generalization and Accident Explanation Ability.
Abstract
Current end-to-end deep learning driving models have two problems: (1) poor generalization to unobserved driving environments when the diversity of the training driving dataset is limited, and (2) a lack of accident explanation ability when the driving model does not work as expected. To tackle these two problems, rooted in the belief that knowledge of an associated easy task is beneficial for addressing a difficult task, we proposed a new driving model composed of a perception module for "see and think" and a driving module for "behave", and trained it stepwise with multi-task perception-related basic knowledge and driving knowledge. Specifically, the segmentation map and depth map (pixel-level understanding of images) were treated as "what & where" and "how far" knowledge for tackling easier driving-related perception problems before generating the final control commands for the difficult driving task. The experimental results demonstrated the effectiveness of multi-task perception knowledge for better generalization and accident explanation ability. With our method, the average success rate on the most difficult navigation tasks in the untrained city of the CoRL test surpassed the current benchmark method by 15 percent in trained weather and 20 percent in untrained weather. Demonstration video link is: this https URL
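The abstract describes a two-stage architecture: a perception module that first learns pixel-level segmentation and depth, and a driving module that maps perception features (conditioned on a navigation command) to control commands, trained stepwise. The following is a minimal sketch of that idea, not the authors' code; all module names, layer sizes, the command encoding, and the freezing scheme are assumptions.

```python
# Minimal sketch (assumed, not the authors' implementation) of the two-stage idea:
# perception learns segmentation ("what & where") and depth ("how far") first,
# then a driving module maps its features to control commands.
import torch
import torch.nn as nn

class PerceptionModule(nn.Module):
    """Shared encoder with two pixel-level heads: segmentation and depth."""
    def __init__(self, num_classes=13):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(64, num_classes, 1)   # per-pixel class logits
        self.depth_head = nn.Conv2d(64, 1, 1)           # per-pixel depth estimate

    def forward(self, image):
        feat = self.encoder(image)
        return feat, self.seg_head(feat), self.depth_head(feat)

class DrivingModule(nn.Module):
    """Maps perception features plus a navigation command to controls."""
    def __init__(self, num_commands=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(64 + num_commands, 128), nn.ReLU(),
            nn.Linear(128, 3),  # steer, throttle, brake
        )

    def forward(self, feat, command_onehot):
        x = self.pool(feat).flatten(1)
        return self.fc(torch.cat([x, command_onehot], dim=1))

# Stepwise training idea: (1) fit the perception module on segmentation and depth
# labels, (2) freeze it and fit the driving module on recorded control labels.
perception, driver = PerceptionModule(), DrivingModule()
image = torch.randn(2, 3, 88, 200)                     # dummy camera batch
command = torch.eye(4)[torch.tensor([0, 2])]           # e.g. follow / turn-left
feat, seg, depth = perception(image)
controls = driver(feat.detach(), command)              # detach ~ frozen perception
print(controls.shape)                                   # torch.Size([2, 3])
```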
Year
2018
Venue
arXiv: Computer Vision and Pattern Recognition
Field
Computer science, Segmentation, Pixel, Artificial intelligence, Deep learning, Depth map, Perception, Machine learning
DocType
Journal
Volume
abs/1809.11100
Citations
4
PageRank
0.39
References
15
Authors
5
Name | Order | Citations | PageRank
Zhihao Li | 1 | 136 | 17.95
Toshiyuki Motoyoshi | 2 | 4 | 0.39
Kazuma Sasaki | 3 | 59 | 3.87
Tetsuya Ogata | 4 | 1158 | 135.73
Shigeki Sugano | 5 | 689 | 161.38