Title
A Reinforcement Learning Based Low-Delay Scheduling With Adaptive Transmission
Abstract
As modern communication systems become indispensable, requirements such as delay and power consumption grow more stringent. In this paper, we adopt a Reinforcement Learning (RL) based approach to obtain the optimal trade-off between delay and power consumption under a given power constraint, in a communication system whose conditions (e.g., channel conditions, traffic arrival rates) can change over time. To this end, we first formulate the problem as an infinite-horizon Markov Decision Process (MDP) and then adopt Q-learning to solve it. To handle the power constraint, we apply the Lagrange multiplier method, which transforms the constrained optimization problem into an unconstrained one. Finally, via simulation, we show that Q-learning achieves the optimal policy.
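The abstract's recipe — fold the power constraint into the reward via a Lagrange multiplier, then run tabular Q-learning on the resulting unconstrained MDP — can be sketched on a toy buffer model. Everything below (queue size, power levels, arrival probability, the multiplier value) is an illustrative assumption, not the paper's actual system model:

```python
import random

random.seed(0)

# --- toy system model (all values are assumptions for illustration) ---
Q_MAX = 5                          # maximum queue length; states are 0..Q_MAX
ACTIONS = [0, 1, 2]                # packets transmitted per slot
POWER = {0: 0.0, 1: 1.0, 2: 2.5}   # convex power cost of each action
ARRIVAL_P = 0.5                    # Bernoulli packet-arrival probability
LAM = 0.5                          # Lagrange multiplier on the power constraint

ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1  # learning rate, discount, exploration
STEPS = 50_000

Q = {(s, a): 0.0 for s in range(Q_MAX + 1) for a in ACTIONS}

def step(state, action):
    """One slot: serve up to `action` packets, then a random arrival."""
    served = min(state, action)
    nxt = state - served + (1 if random.random() < ARRIVAL_P else 0)
    nxt = min(nxt, Q_MAX)  # drop arrivals when the buffer is full
    # Lagrangian cost: queue length (delay proxy, via Little's law)
    # plus lambda times the transmit power.
    cost = state + LAM * POWER[action]
    return nxt, -cost      # reward = negative Lagrangian cost

state = 0
for _ in range(STEPS):
    # epsilon-greedy action selection
    if random.random() < EPS:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    nxt, reward = step(state, action)
    # standard Q-learning update
    best_next = max(Q[(nxt, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = nxt

policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(Q_MAX + 1)}
print(policy)
```

Sweeping `LAM` traces out the delay-power trade-off curve: a larger multiplier penalizes power more heavily and yields a lazier (higher-delay, lower-power) policy. In the full Lagrangian method the multiplier itself is updated until the power constraint is met; here it is simply fixed.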
Year
2019
DOI
10.1109/ICTC46691.2019.8939680
Venue
2019 International Conference on Information and Communication Technology Convergence (ICTC)
Keywords
Reinforcement Learning, delay-power tradeoff, adaptive transmission, infinite-horizon Markov Decision Process
Field
Transmission (mechanics), Mathematical optimization, Lagrange multiplier, Scheduling (computing), Computer science, Markov decision process, Communications system, Communication channel, Low delay, Reinforcement learning
DocType
Conference
ISSN
2162-1233
ISBN
978-1-7281-0894-0
Citations
0
PageRank
0.34
References
3
Authors
2
Name          Order  Citations  PageRank
Yu Zhao       1      0          0.34
Joohyun Lee   2      4          2.79