Title
Grounding Human-To-Vehicle Advice For Self-Driving Vehicles
Abstract
Recent success suggests that deep neural control networks are likely to be a key component of self-driving vehicles. These networks are trained on large datasets to imitate human actions, but they lack semantic understanding of image contents. This makes them brittle and potentially unsafe in situations that do not match the training data. Here, we propose to address this issue by augmenting the training data with natural language advice from a human. Advice includes guidance about what to do and where to attend. We present a first step toward advice giving, in which we train an end-to-end vehicle controller that accepts advice. The controller adapts both its visual attention over the scene and its control outputs (steering and speed). Attention mechanisms tie controller behavior to salient objects mentioned in the advice. We evaluate our model on a novel advisable driving dataset with manually annotated human-to-vehicle advice, called the Honda Research Institute-Advice Dataset (HAD). We show that taking advice improves the performance of the end-to-end network, and that the network attends to a variety of visual features referenced in the advice. The dataset is available at https://usa.honda-ri.com/HAD.
Year
2019
DOI
10.1109/CVPR.2019.01084
Venue
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019)
Field
Computer vision, Computer science, Ground, Human–computer interaction, Artificial intelligence
DocType
Conference
ISSN
1063-6919
Citations
2
PageRank
0.37
References
0
Authors
5
Name            Order  Citations  PageRank
Jinkyu Kim      1      16         3.07
Teruhisa Misu   2      19         5.89
Yi-Ting Chen    3      11         4.20
Ashish Tawari   4      219        16.07
John Canny      5      12123      1786.38