Title: Reinforcement Learning Based Computation Migration for Vehicular Cloud Computing
Abstract: By exploiting the rapidly increasing communication and computing capabilities of vehicles brought about by the development of connected and autonomous vehicles, vehicular cloud computing (VCC) can improve overall computational efficiency by offloading computing tasks from the edge or remote cloud. In this paper, we study the computation migration problem in VCC, where a vehicle transfers unfinished computing missions to other vehicles before leaving the network edge, so as to avoid mission failures. Specifically, we consider a computing mission offloaded from the edge cloud to the vehicular cloud. The mission has a linear logical topology, i.e., it consists of tasks that must be executed sequentially. The migration problem is formulated as a sequential decision-making problem that aims to minimize the overall response time. Owing to vehicular mobility, communication time, and heterogeneous vehicular computing capabilities, the problem is difficult to model and solve. We thus propose a novel on-policy reinforcement learning based computation migration scheme, which learns the optimal policy of the dynamic environment on the fly. Numerical results demonstrate that the proposed scheme can adapt to the uncertain and changing environment and guarantee low computing latency.
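The abstract describes an on-policy reinforcement learning scheme that picks, for each task in a sequential mission, which vehicle should execute it so that the total response time is minimized. The paper itself does not provide code; the following is a minimal illustrative sketch of that idea using tabular SARSA (a standard on-policy method) on a hypothetical toy model. The task count, vehicle count, mean response times, and all hyperparameters are assumptions for illustration only, not values from the paper.

```python
import random

# Hypothetical toy model (not from the paper): a mission of N_TASKS
# sequential tasks is migrated among N_VEHICLES vehicles. The state is
# the index of the current task, the action is the vehicle chosen to
# execute it, and the reward is the negative per-task response time,
# so maximizing return minimizes total response time.
N_TASKS = 5
N_VEHICLES = 3
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Mean response time of each vehicle (assumed values), mimicking the
# heterogeneous vehicular computing capabilities mentioned in the abstract.
MEAN_TIME = [2.0, 1.0, 3.0]

def step(task, vehicle, rng):
    """Execute one task on a vehicle; return (reward, next_task)."""
    delay = rng.expovariate(1.0 / MEAN_TIME[vehicle])  # random response time
    return -delay, task + 1

def sarsa(episodes=5000, seed=0):
    """Learn a task->vehicle policy with on-policy tabular SARSA."""
    rng = random.Random(seed)
    q = [[0.0] * N_VEHICLES for _ in range(N_TASKS + 1)]

    def policy(s):
        # Epsilon-greedy action selection (the behavior policy SARSA evaluates).
        if rng.random() < EPSILON:
            return rng.randrange(N_VEHICLES)
        return max(range(N_VEHICLES), key=lambda a: q[s][a])

    for _ in range(episodes):
        s, a = 0, policy(0)
        while s < N_TASKS:
            r, s2 = step(s, a, rng)
            a2 = policy(s2) if s2 < N_TASKS else 0
            target = r + (GAMMA * q[s2][a2] if s2 < N_TASKS else 0.0)
            q[s][a] += ALPHA * (target - q[s][a])
            s, a = s2, a2
    return q

q = sarsa()
# Greedy vehicle choice per task; with these assumed means the fastest
# vehicle (index 1) is likely to dominate after training.
print([max(range(N_VEHICLES), key=lambda a: q[s][a]) for s in range(N_TASKS)])
```

The sketch captures only the sequential decision structure; the paper's actual scheme additionally accounts for vehicular mobility and communication time, which would enter through a richer state and reward model.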
Year: 2018
DOI: 10.1109/GLOCOM.2018.8647996
Venue: IEEE Global Communications Conference
Keywords: vehicular cloud, computation migration, vehicular mobility, decision making, reinforcement learning
Field: Logical topology, Latency (engineering), Computer science, Computer network, Response time, Edge device, Reinforcement learning, Computation, Cloud computing
DocType: Conference
ISSN: 2334-0983
Citations: 0
PageRank: 0.34
References: 0
Authors: 6
Name          Order  Citations  PageRank
Fei Sun       1      23         7.74
Nan Cheng     2      970        81.34
Shan Zhang    3      365        25.66
Zhou, H.      4      166        14.18
Lin Gui       5      169        24.46
Xuemin Shen   6      15389      928.67