Abstract |
---|
Mobile edge computing allows resource-constrained mobile devices to offload computationally intensive tasks to powerful devices at the edge of the network. Designing an optimal offloading strategy for deciding whether to execute a task locally or at an edge device has attracted much attention recently. However, the existing literature focuses mostly on settings with a single mobile device or a single edge device, limiting the applicability of the results to real-world deployments. This paper considers a setting with multiple non-cooperative mobile devices and multiple edge devices, and aims to design offloading policies that minimize both the task drop rate and the execution delay without requiring information about the dynamics of the environment, such as channel models or task arrival rates at the mobile devices. This non-cooperative resource allocation problem is only partially observable to each mobile device. We propose a method to mitigate the partial observability and then apply a deep reinforcement learning-based policy that progressively learns the dynamics of the environment as well as the long-term consequences of decisions. Numerical results demonstrate that the proposed algorithm significantly reduces the task drop rate compared to existing offloading policies, while minimizing the energy and computation cost of each mobile device. |
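The abstract describes learning an offload-or-execute-locally policy by reinforcement learning without a model of the environment. The paper's actual method uses deep RL in a partially observable multi-device setting and is not reproduced here; the following is only a minimal tabular Q-learning sketch of the core decision, with a toy state (queue level, channel quality) and an illustrative reward model that are assumptions for demonstration, not the paper's formulation.

```python
import random

# Toy offloading decision: each step a mobile device either runs a task
# locally or offloads it to an edge device. State = (queue level, channel
# quality); rewards below are illustrative assumptions only.
ACTIONS = ["local", "offload"]
STATES = [(q, c) for q in range(3) for c in range(2)]

def step(state, action):
    """Toy environment: offloading pays off when the channel is good."""
    q, c = state
    if action == "offload":
        reward = 1.0 if c == 1 else -1.0      # good channel -> low delay
        q_next = max(q - 1, 0)                # offloaded task leaves the queue
    else:
        reward = 0.5 if q == 0 else -0.5      # local execution suffers under load
        arrival = 1 if random.random() < 0.5 else 0
        q_next = min(max(q + arrival - (1 if q > 0 else 0), 0), 2)
    c_next = random.randint(0, 1)             # channel evolves randomly
    return (q_next, c_next), reward

def train(steps=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Plain epsilon-greedy Q-learning over the toy state space."""
    random.seed(seed)
    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    state = (0, 0)
    for _ in range(steps):
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(state, x)])
        nxt, r = step(state, a)
        best_next = max(Q[(nxt, b)] for b in ACTIONS)
        Q[(state, a)] += alpha * (r + gamma * best_next - Q[(state, a)])
        state = nxt
    return Q

Q = train()
```

In this toy model the learned values come to prefer offloading when the channel is good and the queue is loaded; the paper replaces the table with a deep network and adds a mechanism to cope with each device observing the shared edge resources only partially.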
Year | DOI | Venue |
---|---|---|
2019 | 10.1109/GLOBECOM38437.2019.9013115 | IEEE Global Communications Conference |
Keywords | DocType | ISSN
---|---|---
Mobile edge computing, reinforcement learning, task offloading | Conference | 2334-0983
Citations | PageRank | References
---|---|---
0 | 0.34 | 0
Authors |
---|
3 |
Name | Order | Citations | PageRank |
---|---|---|---|
Javad Heydari | 1 | 0 | 0.34 |
Viswanath Ganapathy | 2 | 0 | 0.34 |
Mohak Shah | 3 | 36 | 8.78