Title
Towards neuroimaging real-time driving using Convolutional Neural Networks
Abstract
Most previous attempts at autonomous driving in the TORCS environment rely on precalculated data, such as the exact distance to opponents or the car's actual position with respect to the center of the track. As humans, we drive based on locally available information gathered by our senses, mainly sight; this information is not precomputed, and it is left to the agent to make sense of it. This work explores and evaluates the development of autonomous steering from visual-only input in real-time driving. Convolutional Neural Networks (CNNs) have proven to excel at categorical image classification, but little work has examined how they perform in continuous output spaces such as predicting steering values. The results presented here show that CNNs are indeed capable of performing well in such spaces and can be trained by example. Several modifications further improved network accuracy and hence driving performance: applying edge detection filters to the input image, selecting network outputs with a weighted-average method, and including unusual situations in the training data, which made the neurovisual agent considerably more robust and improved its generalization power.
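To make the two preprocessing/decoding ideas mentioned in the abstract concrete, the following is a minimal sketch in Python. It assumes a grayscale camera frame, a Sobel filter as the edge detector, and a network output layer of 21 discretized steering bins spanning [-1, 1]; the filter choice, bin count, and function names are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

# Assumed discretization of the steering range into 21 bins (hypothetical).
STEERING_BINS = np.linspace(-1.0, 1.0, 21)

def sobel_edges(gray):
    """Edge-detection preprocessing with a Sobel filter (one plausible choice).
    `gray` is a 2-D array of pixel intensities."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(gray.astype(float), 1, mode="edge")
    gx = np.zeros(gray.shape)
    gy = np.zeros(gray.shape)
    for i in range(gray.shape[0]):
        for j in range(gray.shape[1]):
            window = pad[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(window * kx)
            gy[i, j] = np.sum(window * ky)
    return np.hypot(gx, gy)  # gradient magnitude image fed to the CNN

def weighted_average_steering(output_activations):
    """Collapse per-bin network activations (assumed non-negative, e.g. softmax)
    into a single continuous steering value via their weighted average."""
    weights = np.asarray(output_activations, dtype=float)
    weights = weights / weights.sum()
    return float(np.dot(weights, STEERING_BINS))

# Illustrative usage with placeholder data standing in for a camera frame
# and for the CNN's output layer.
frame = np.random.rand(64, 64)
edges = sobel_edges(frame)
steer = weighted_average_steering(np.random.rand(21))
```

The weighted average turns a categorical output layer into a smooth, continuous steering command, which is one way the abstract's "weighted average method" could be realized.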
Year
2016
DOI
10.1109/CEEC.2016.7835901
Venue
2016 8th Computer Science and Electronic Engineering (CEEC)
Keywords
convolutional neural networks, neuroimage, visual input, autonomous driving, TORCS
Field
Training set, Categorical variable, Computer science, Convolutional neural network, Edge detection, Sight, Artificial intelligence, Neuroimaging, Contextual image classification, Machine learning
DocType
Conference
ISSN
2472-1530
ISBN
978-1-5090-1275-6
Citations
0
PageRank
0.34
References
9
Authors
1
Name
Carlos Fernandez Musoles
Order
1
Citations
0
PageRank
0.34