Title
Scheduling the Operation of a Connected Vehicular Network Using Deep Reinforcement Learning
Abstract
Driven by the rapid evolution of the Internet of Things, conventional vehicular ad hoc networks are progressing toward the Internet of Vehicles (IoV). With the rapid development of computation and communication technologies, the IoV offers substantial commercial and research value, attracting numerous companies and researchers. To support driver well-being and the demand for continuous connectivity in the IoV era, this paper addresses both safety and quality-of-service (QoS) concerns in a green, balanced, connected, and efficient vehicular network. Building on recent advances in training deep neural networks, we employ a deep reinforcement learning model, the deep Q-network, which learns a scheduling policy from high-dimensional inputs describing the current state of the underlying network. The learned policy extends the lifetime of the battery-powered vehicular network while promoting a safe environment that meets acceptable QoS levels. Our deep reinforcement learning model is found to outperform several scheduling benchmarks in terms of completed request percentage (10–25%), mean request delay (10–15%), and total network lifetime (5–65%).
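For readers unfamiliar with the approach named in the abstract, the following is a minimal illustrative sketch (not the paper's model) of tabular Q-learning, the update rule that a deep Q-network approximates with a neural network over high-dimensional inputs. The toy "scheduler" decides whether a battery-powered roadside unit serves a pending request or sleeps; all states, actions, rewards, and dynamics here are hypothetical.

```python
import random

# Hypothetical toy problem: a node either serves a pending request or sleeps.
ACTIONS = ("serve", "sleep")

def q_update(q, s, a, r, s2, alpha=0.1, gamma=0.9):
    """One temporal-difference step toward the Bellman target
    r + gamma * max_b Q(s2, b); a DQN fits this same target with a network."""
    best_next = max(q.get((s2, b), 0.0) for b in ACTIONS)
    old = q.get((s, a), 0.0)
    q[(s, a)] = old + alpha * (r + gamma * best_next - old)

def step(s, a):
    """Toy dynamics: serving a pending request earns reward 1, else 0."""
    r = 1.0 if (s == "pending" and a == "serve") else 0.0
    s2 = random.choice(("pending", "idle"))  # random request arrivals
    return r, s2

random.seed(0)
q, s = {}, "idle"
for _ in range(2000):
    # Epsilon-greedy action selection: explore 20% of the time.
    if random.random() < 0.2:
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda b: q.get((s, b), 0.0))
    r, s2 = step(s, a)
    q_update(q, s, a, r, s2)
    s = s2

# After training, serving in the "pending" state has the higher Q-value.
```

The paper's actual model replaces the Q-table with a deep neural network and a richer state describing the vehicular network; this sketch only shows the underlying update rule.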
Year
2019
DOI
10.1109/tits.2018.2832219
Venue
IEEE Transactions on Intelligent Transportation Systems
Keywords
Machine learning, Optimal scheduling, Internet, Quality of service, Roads, Vehicle dynamics, Safety
Field
Scheduling (computing), Simulation, Computer network, Quality of service, Exploit, Vehicle dynamics, Engineering, Wireless ad hoc network, Computation, The Internet, Reinforcement learning
DocType
Journal
Volume
20
Issue
5
ISSN
1524-9050
Citations
2
PageRank
0.35
References
0
Authors
3
Name               Order  Citations  PageRank
Ribal Atallah      1      54         6.70
Chadi Assi         2      1357       137.73
Maurice Khabbaz    3      188        14.12