Title
Fast Recurrent Fully Convolutional Networks for Direct Perception in Autonomous Driving
Abstract
Deep convolutional neural networks (CNNs) have been shown to perform extremely well on a variety of tasks, including subtasks of autonomous driving such as image segmentation and object classification. However, networks designed for these tasks typically require vast quantities of training data and long training periods to converge. We investigate the design rationale behind end-to-end driving networks by proposing and comparing three small, computationally inexpensive, deep end-to-end neural network models that generate driving control signals directly from input images. In contrast to prior work that decomposes the autonomous driving task into separate subproblems, our models take a novel approach by combining deep, thin Fully Convolutional Networks (FCNs) with recurrent neural networks at low parameter counts to tackle a complex end-to-end regression task: predicting both steering and acceleration commands. In addition, we include layers optimized for classification to allow the networks to implicitly learn image semantics. We show that the resulting networks use 3x fewer parameters than the most recent comparable end-to-end driving network and 500x fewer parameters than the AlexNet variations, and that they converge both faster and to lower losses while remaining robust against overfitting.
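The abstract describes the architecture only at a high level, so the following is a minimal, hypothetical PyTorch sketch of the kind of model it outlines: a deep, thin fully convolutional backbone feeding a recurrent layer that regresses steering and acceleration from a sequence of images. All layer widths, the GRU choice, the hidden size, and the class name RecurrentFCNDriver are illustrative assumptions, not the paper's published configuration.

    # Illustrative sketch only: a deep-and-thin FCN backbone plus a recurrent
    # layer, regressing steering and acceleration end-to-end. Every width,
    # depth, and hyperparameter below is a hypothetical placeholder.
    import torch
    import torch.nn as nn

    class RecurrentFCNDriver(nn.Module):
        def __init__(self, hidden_size: int = 64):
            super().__init__()
            # Deep, thin convolutional stack: few channels per layer keeps
            # the total parameter count small (channel widths assumed).
            self.fcn = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 24, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(24, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 48, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),  # global pooling -> fixed-size feature
            )
            # Recurrence over the frame sequence captures temporal context.
            self.rnn = nn.GRU(input_size=48, hidden_size=hidden_size,
                              batch_first=True)
            # Two regression outputs per timestep: steering and acceleration.
            self.head = nn.Linear(hidden_size, 2)

        def forward(self, frames: torch.Tensor) -> torch.Tensor:
            # frames: (batch, time, 3, H, W)
            b, t = frames.shape[:2]
            feats = self.fcn(frames.flatten(0, 1)).flatten(1)  # (b*t, 48)
            out, _ = self.rnn(feats.view(b, t, -1))            # (b, t, hidden)
            return self.head(out)                              # (b, t, 2)

    # Usage: a batch of 4 ten-frame clips of 96x96 RGB images.
    if __name__ == "__main__":
        model = RecurrentFCNDriver()
        controls = model(torch.randn(4, 10, 3, 96, 96))
        print(controls.shape)  # torch.Size([4, 10, 2])

Keeping the per-layer channel counts small is what holds the parameter count down, which is consistent with the abstract's emphasis on deep-and-thin design over wide AlexNet-style layers.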
Year
2017
Venue
arXiv: Computer Vision and Pattern Recognition
Field
Convolutional neural network, Computer science, Robustness (computer science), Image segmentation, Artificial intelligence, Acceleration, Overfitting, Design rationale, Artificial neural network, Machine learning, Semantics
DocType
Journal
Volume
abs/1711.06459
Citations
1
PageRank
0.34
References
4
Authors
3
Name               Order   Citations   PageRank
Yiqi Hou           1       1           0.34
Sascha Hornauer    2       2           3.07
Karl Zipser        3       2           1.70