Title
Action-Bounding for Reinforcement Learning in Energy Harvesting Communication Systems
Abstract
In this paper, we consider a power allocation problem for energy harvesting communication systems, in which a transmitter sends messages to a receiver using the energy harvested into its rechargeable battery. We propose a new power allocation strategy based on deep reinforcement learning that maximizes the expected total transmitted data under random energy arrival and random channel processes. The key idea of our scheme is an action-bounding technique that, using only causal knowledge of the energy and channel processes, steers the transmitter away from undesirable power allocation policies. This technique allows conventional reinforcement learning algorithms to operate more accurately in such systems and improves their performance. Moreover, we show that the proposed scheme outperforms existing power allocation strategies in terms of the expected total transmitted data.
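The core constraint behind the action bound described in the abstract is causal: at each slot the transmitter can spend at most the energy currently in its battery. A minimal sketch of how such a bound might be enforced on a raw policy output is shown below; the function names, the hardware limit `p_max`, the battery capacity `b_max`, and the one-slot battery dynamics are illustrative assumptions, not details from the paper.

```python
import numpy as np

def bound_action(raw_action, battery_level, p_max):
    """Clip a raw policy output to the feasible transmit-power range.

    The feasible set at each slot is [0, min(battery_level, p_max)]:
    the transmitter cannot spend more energy than it currently holds
    (causal knowledge only) nor exceed its hardware power limit.
    """
    upper = min(battery_level, p_max)
    return float(np.clip(raw_action, 0.0, upper))

def step(battery, harvested, raw_action, p_max=2.0, b_max=10.0):
    """One slot of illustrative battery dynamics with a bounded action."""
    power = bound_action(raw_action, battery, p_max)
    # Spend the bounded power, then add the new arrival, capped at capacity.
    next_battery = min(battery - power + harvested, b_max)
    return power, next_battery
```

Because the bound is applied outside the learner, any off-the-shelf deep RL algorithm (e.g. deep Q-learning or actor-critic, as listed in the keywords) only ever executes feasible actions, which is the sense in which the abstract says the technique keeps the learner away from undesirable policies.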
Year: 2018
DOI: 10.1109/GLOCOM.2018.8647681
Venue: IEEE Global Communications Conference
Keywords: energy harvesting communication systems, power allocation, throughput maximization, deep reinforcement learning, deep Q-learning, actor critic
Field: Transmitter, Computer science, Energy harvesting, Communication channel, Computer network, Communications system, Battery (electricity), Reinforcement learning, Bounding overwatch, Distributed computing
DocType: Conference
ISSN: 2334-0983
Citations: 0
PageRank: 0.34
References: 0
Authors: 4
Name           Order  Citations  PageRank
Heasung Kim    1      0          0.68
Heecheol Yang  2      1          2.06
Yeongmo Kim    3      0          0.34
Jungwoo Lee    4      1467       156.34