Title
Deep-Reinforcement-Learning-Based Mode Selection and Resource Allocation for Cellular V2X Communications
Abstract
Cellular vehicle-to-everything (V2X) communication is crucial to supporting future diverse vehicular applications. However, for safety-critical applications, unstable vehicle-to-vehicle (V2V) links and the high signaling overhead of centralized resource allocation approaches become bottlenecks. In this article, we investigate a joint optimization problem of transmission mode selection and resource allocation for cellular V2X communications. In particular, the problem is formulated as a Markov decision process, and a deep reinforcement learning (DRL)-based decentralized algorithm is proposed to maximize the sum capacity of vehicle-to-infrastructure users while meeting the latency and reliability requirements of V2V pairs. Moreover, considering the training limitations of local DRL models, a two-timescale federated DRL algorithm is developed to help obtain robust models: a graph-theory-based vehicle clustering algorithm is executed on a large timescale, and the federated learning algorithm is conducted on a small timescale. Simulation results show that the proposed DRL-based algorithm outperforms other decentralized baselines and validate the superiority of the two-timescale federated DRL algorithm for newly activated V2V pairs.
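The decentralized algorithm described in the abstract lends itself to a standard deep Q-network (DQN) formulation, with one agent per V2V pair acting on locally observed state. Below is a minimal illustrative sketch of such an agent, assuming a flattened discrete action space over transmission mode, resource block, and power level; the state features, network sizes, reward shaping, and hyperparameters are assumptions for illustration, not the paper's exact design.

    import random
    from collections import deque

    import numpy as np
    import torch
    import torch.nn as nn

    # Illustrative sizes (assumptions, not the paper's settings).
    N_MODES, N_RBS, N_POWERS = 2, 4, 3      # transmission mode x resource block x power level
    N_ACTIONS = N_MODES * N_RBS * N_POWERS  # flattened discrete action space
    STATE_DIM = 16                          # local CSI, interference, and QoS features

    class QNet(nn.Module):
        def __init__(self, state_dim, n_actions):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim, 128), nn.ReLU(),
                nn.Linear(128, 128), nn.ReLU(),
                nn.Linear(128, n_actions),
            )

        def forward(self, s):
            return self.net(s)

    class V2VAgent:
        """One agent per V2V pair; decides from locally observed state only."""
        def __init__(self):
            self.q = QNet(STATE_DIM, N_ACTIONS)
            self.target = QNet(STATE_DIM, N_ACTIONS)
            self.target.load_state_dict(self.q.state_dict())
            self.opt = torch.optim.Adam(self.q.parameters(), lr=1e-3)
            self.buffer = deque(maxlen=50_000)  # replay memory of (s, a, r, s') tuples

        def act(self, state, eps):
            if random.random() < eps:           # epsilon-greedy exploration
                return random.randrange(N_ACTIONS)
            with torch.no_grad():
                q = self.q(torch.as_tensor(state, dtype=torch.float32))
            return int(q.argmax())

        @staticmethod
        def decode(action):
            """Map a flat action index back to (mode, resource block, power level)."""
            mode, rest = divmod(action, N_RBS * N_POWERS)
            rb, power = divmod(rest, N_POWERS)
            return mode, rb, power

        def learn(self, batch_size=64, gamma=0.99):
            # The stored reward would combine V2I sum capacity with penalties for
            # violating V2V latency/reliability constraints (assumed shaping).
            if len(self.buffer) < batch_size:
                return
            s, a, r, s2 = zip(*random.sample(self.buffer, batch_size))
            s = torch.as_tensor(np.array(s), dtype=torch.float32)
            s2 = torch.as_tensor(np.array(s2), dtype=torch.float32)
            a = torch.as_tensor(a, dtype=torch.int64).unsqueeze(1)
            r = torch.as_tensor(r, dtype=torch.float32)
            q = self.q(s).gather(1, a).squeeze(1)
            with torch.no_grad():               # bootstrapped temporal-difference target
                y = r + gamma * self.target(s2).max(1).values
            loss = nn.functional.mse_loss(q, y)
            self.opt.zero_grad()
            loss.backward()
            self.opt.step()

        def sync_target(self):
            self.target.load_state_dict(self.q.state_dict())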
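The two-timescale federated DRL algorithm can likewise be sketched. The version below uses connected components of a vehicle-proximity graph as a stand-in for the paper's graph-theory-based clustering (large timescale), and FedAvg-style parameter averaging of cluster members' local models (small timescale); the timescale ratio, clustering radius, and placeholder model shapes are all assumptions.

    import copy
    import itertools

    import numpy as np
    import torch

    def proximity_clusters(positions, radius):
        """Large timescale: cluster vehicles as connected components of a
        proximity graph (a stand-in for the paper's graph-based clustering)."""
        n = len(positions)
        adj = [[] for _ in range(n)]
        for i, j in itertools.combinations(range(n), 2):
            if np.linalg.norm(positions[i] - positions[j]) <= radius:
                adj[i].append(j)
                adj[j].append(i)
        seen, clusters = set(), []
        for v in range(n):
            if v in seen:
                continue
            stack, comp = [v], []
            while stack:                    # depth-first search over one component
                u = stack.pop()
                if u not in seen:
                    seen.add(u)
                    comp.append(u)
                    stack.extend(adj[u])
            clusters.append(comp)
        return clusters

    def fed_avg(models):
        """Small timescale: FedAvg-style averaging of cluster members' models."""
        avg = copy.deepcopy(models[0].state_dict())
        for key in avg:
            avg[key] = torch.stack([m.state_dict()[key].float() for m in models]).mean(0)
        for m in models:
            m.load_state_dict(avg)

    # Two-timescale loop with placeholder local Q-networks (illustrative schedule).
    T_LARGE, T_SMALL = 100, 10                       # assumed timescale ratio
    n_vehicles = 8
    models = [torch.nn.Linear(16, 24) for _ in range(n_vehicles)]  # stand-ins for local DRL models
    positions = np.random.rand(n_vehicles, 2) * 500  # synthetic positions in meters
    clusters = proximity_clusters(positions, radius=150.0)
    for t in range(1, 501):
        # ... each vehicle trains its local model on its own experience here ...
        if t % T_LARGE == 0:                         # re-cluster on the large timescale
            clusters = proximity_clusters(positions, radius=150.0)
        if t % T_SMALL == 0:                         # aggregate on the small timescale
            for comp in clusters:
                fed_avg([models[i] for i in comp])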
Year
2020
DOI
10.1109/JIOT.2019.2962715
Venue
IEEE Internet of Things Journal
Keywords
Resource management, Vehicle-to-everything, Reliability, Clustering algorithms, Quality of service, Interference, Reinforcement learning
DocType
Journal
Volume
7
Issue
7
ISSN
2327-4662
Citations
15
PageRank
0.49
References
0
Authors
4
Name           Order   Citations   PageRank
Xinran Zhang   1       38          12.02
Mugen Peng     2       2779        200.37
Shi Yan        3       38          3.91
Yaohua Sun     4       153         9.72