Title
Deep Reinforcement Learning Based Resource Management for DNN Inference in Industrial IoT
Abstract
Performing deep neural network (DNN) inference in real time requires excessive network resources, which poses a significant challenge to resource-limited Industrial Internet of Things (IIoT) networks. To address this challenge, in this paper we introduce an end-edge-cloud orchestration architecture, in which inference task assignment and DNN model placement are flexibly coordinated. Specifically, the DNN models, trained and pre-stored in the cloud, are properly placed at the end and edge to perform DNN inference. To achieve efficient DNN inference, a multi-dimensional resource management problem is formulated to maximize the average inference accuracy while satisfying the strict delay requirements of inference tasks. Due to the mixed-integer decision variables, it is difficult to solve the formulated problem directly. Thus, we transform the formulated problem into a Markov decision process, which can be solved efficiently. Furthermore, a deep reinforcement learning based resource management scheme is proposed to make real-time optimal resource allocation decisions. Simulation results demonstrate that the proposed scheme can efficiently allocate the available spectrum, caching, and computing resources, and improve average inference accuracy by 31.4% compared with the deep deterministic policy gradient benchmark.
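The abstract describes trading inference accuracy against strict task deadlines when assigning DNN models across end, edge, and cloud. As a minimal illustration of that trade-off (not the paper's DRL scheme), the toy sketch below picks, for each task, the most accurate model variant that still meets the deadline; all model names, accuracy values, and delays are hypothetical placeholders, and the greedy rule stands in for the learned policy.

```python
import random

# Hypothetical toy version of the accuracy-vs-delay trade-off: each DNN model
# variant has an (accuracy, delay) profile; larger models are more accurate
# but slower. Numbers are illustrative, not from the paper.
MODELS = {
    # name: (accuracy, delay in ms)
    "small": (0.80, 10.0),
    "medium": (0.90, 25.0),
    "large": (0.95, 60.0),
}

def reward(model_name, deadline_ms):
    """Reward = accuracy if the deadline is met, else 0 (hard delay constraint)."""
    accuracy, delay = MODELS[model_name]
    return accuracy if delay <= deadline_ms else 0.0

def greedy_policy(deadline_ms):
    """Pick the most accurate model that still meets the deadline.

    Falls back to the smallest model when no variant is feasible.
    """
    feasible = [(acc, name) for name, (acc, d) in MODELS.items() if d <= deadline_ms]
    return max(feasible)[1] if feasible else "small"

if __name__ == "__main__":
    random.seed(0)
    deadlines = [random.uniform(5.0, 80.0) for _ in range(1000)]
    avg = sum(reward(greedy_policy(d), d) for d in deadlines) / len(deadlines)
    print(f"average inference accuracy (toy greedy baseline): {avg:.3f}")
```

In the paper's formulation the decision space also couples spectrum, caching, and computing allocation across tasks, which is why a learned DRL policy is used instead of a per-task greedy rule like this one.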
Year
2021
DOI
10.1109/TVT.2021.3068255
Venue
IEEE Transactions on Vehicular Technology
Keywords
DNN inference, industrial IoT, resource management, deep reinforcement learning
DocType
Journal
Volume
70
Issue
8
ISSN
0018-9545
Citations
6
PageRank
0.40
References
0
Authors
7
Name | Order | Citations | PageRank
Zhang Wei | 1 | 392 | 53.03
Dong Yang | 2 | 18 | 2.64
Peng Haixia | 3 | 6 | 0.40
Wen Wu | 4 | 6 | 0.40
Wei Quan | 5 | 100 | 11.79
Hongke Zhang | 6 | 6 | 0.40
Xuemin Shen | 7 | 15389 | 928.67