Title
Reinforcement-Learning- and Belief-Learning-Based Double Auction Mechanism for Edge Computing Resource Allocation
Abstract
In recent years, we have witnessed compelling applications of the Internet of Things (IoT), ranging from everyday living to industrial production. Owing to computation and power constraints, IoT devices have to offload their tasks to remote cloud services. However, the long-distance transmission poses significant challenges for latency-sensitive applications, such as autonomous driving and industrial control. As a remedy, mobile edge computing (MEC) is deployed at the edge of the network to reduce transmission delay. With MEC in place, allocating its limited computing resources becomes a critical problem for guaranteeing the efficient operation of the whole IoT system. In this article, we formulate resource management between the MEC server and IoT devices as a double auction game. To search for the Nash equilibrium, we introduce the experience-weighted attraction (EWA) algorithm, which runs behind each participant. With this AI method, auction participants acquire and accumulate experience by observing others’ behavior and through introspection, which accelerates each agent’s learning of a trading policy in such an opaque environment. Simulation results are presented to evaluate the convergence and correctness of our architecture and algorithm.
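The abstract names experience-weighted attraction (EWA) learning as the device by which each auction participant updates its trading policy. The paper's exact formulation is not reproduced here; the sketch below is a minimal, generic version of the standard EWA update (decayed attractions plus realized and forgone payoffs, mapped to choice probabilities by a logit rule). All names and parameter values (`phi`, `delta`, `rho`, `lam`) are illustrative assumptions, not the authors' settings.

```python
import math

def ewa_update(attractions, experience, chosen, payoffs,
               phi=0.9, delta=0.5, rho=0.9):
    """One round of a generic experience-weighted attraction (EWA) update.

    attractions : list of A_j(t-1), one attraction per strategy
    experience  : N(t-1), the scalar experience weight
    chosen      : index of the strategy actually played this round
    payoffs     : pi_j, the payoff each strategy would have earned
                  against the opponents' realized actions
    phi         : decay applied to old attractions
    delta       : weight given to forgone (unplayed) payoffs
    rho         : decay applied to the experience weight
    """
    new_experience = rho * experience + 1.0
    new_attractions = []
    for j, (a, pi) in enumerate(zip(attractions, payoffs)):
        # The played strategy gets full weight; unplayed ones get delta.
        weight = 1.0 if j == chosen else delta
        new_attractions.append(
            (phi * experience * a + weight * pi) / new_experience
        )
    return new_attractions, new_experience

def logit_choice_probs(attractions, lam=1.0):
    """Map attractions to mixed-strategy probabilities via a logit rule."""
    exps = [math.exp(lam * a) for a in attractions]
    total = sum(exps)
    return [e / total for e in exps]
```

In a double-auction setting, each buyer or seller would keep one attraction per candidate bid/ask and, after each round, feed the realized and counterfactual payoffs into `ewa_update`, then sample its next quote from `logit_choice_probs`.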
Year
2020
DOI
10.1109/JIOT.2019.2953108
Venue
IEEE Internet of Things Journal
Keywords
Task analysis, Resource management, Games, Cloud computing, Heuristic algorithms, Reinforcement learning
DocType
Journal
Volume
7
Issue
2
ISSN
2327-4662
Citations
7
PageRank
0.36
References
0
Authors
5
Name            Order  Citations  PageRank
Quanyi Li       1      2          0.36
Haipeng Yao     2      143        17.59
Tianle Mai      3      27         3.43
Chunxiao Jiang  4      2064       161.92
Yan Zhang       5      5818       354.13