Title
Reinforcement learning for power management in wireless multimedia communications
Abstract
We consider the problem of energy-efficient point-to-point transmission of delay-sensitive data (e.g. multimedia data) over a fading channel. We propose a rigorous and unified framework for simultaneously utilizing both physical-layer and system-level techniques to minimize energy consumption, under delay constraints, in the presence of stochastic and unknown traffic and channel conditions. We formulate the problem as a Markov decision process and solve it online using reinforcement learning. The advantages of the proposed online method are that (i) it does not require a priori knowledge of the traffic arrival and channel statistics to determine the jointly optimal physical-layer and system-level power management strategies; (ii) it exploits partial information about the system so that less information needs to be learned than when using conventional reinforcement learning algorithms; and (iii) it obviates the need for action exploration, which severely limits the adaptation speed and run-time performance of conventional reinforcement learning algorithms.
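The abstract describes formulating joint physical-layer and system-level power management as a Markov decision process and learning the policy online while exploiting partial knowledge of the system, so that no action exploration is needed. As a rough illustration only (not the authors' algorithm), the sketch below shows one common way such a scheme can be structured in Python: a post-decision-state value function over an assumed buffer-and-channel state, a toy convex power-cost model, simulated Poisson arrivals, and greedy action selection against the learned values. All model parameters (buffer size, channel states, power function, arrival rate, weights) are hypothetical.

```python
import numpy as np

# Minimal post-decision-state (PDS) learning sketch for an energy-delay
# trade-off. Every model choice below is an illustrative assumption, not
# taken from the paper.

B = 10                  # buffer capacity in packets (assumed)
H = 4                   # number of discrete channel states (assumed)
ACTIONS = np.arange(5)  # packets transmitted per slot (assumed)
GAMMA = 0.98            # discount factor
DELAY_WEIGHT = 1.0      # weight on buffer (delay) cost

rng = np.random.default_rng(0)

def power_cost(tx, h):
    """Energy to send `tx` packets in channel state h (toy convex model)."""
    return (2.0 ** tx - 1.0) / (h + 1.0)

# Value function over post-decision states: (buffer after transmission, channel).
V = np.zeros((B + 1, H))

def greedy(b, h):
    """Return (action, cost) minimizing known immediate cost plus discounted
    PDS value; because the minimization uses only known quantities, no
    explicit exploration step is required."""
    best_a, best_c = 0, np.inf
    for a in ACTIONS:
        tx = min(a, b)  # cannot transmit more than is buffered
        c = power_cost(tx, h) + DELAY_WEIGHT * (b - tx) + GAMMA * V[b - tx, h]
        if c < best_c:
            best_a, best_c = tx, c
    return best_a, best_c

b, h = 0, rng.integers(H)
alpha = 0.1  # learning rate
for t in range(50_000):
    a, _ = greedy(b, h)
    pds_b = b - a                       # post-decision buffer state
    arrivals = rng.poisson(1.5)         # unknown traffic, simulated here
    b_next = min(pds_b + arrivals, B)
    h_next = rng.integers(H)            # i.i.d. channel for simplicity
    _, target = greedy(b_next, h_next)  # greedy evaluation of the next state
    V[pds_b, h] += alpha * (target - V[pds_b, h])  # learn PDS value from sample
    b, h = b_next, h_next
```

The key point the sketch tries to convey is that the unknown dynamics (arrivals and channel transitions) only enter through the sampled transition used in the value update, while the action choice itself is a deterministic minimization over known costs, which is why exploration can be avoided.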
Year
2011
DOI
10.1109/ICME.2011.6012018
Venue
IEEE International Conference on Multimedia and Expo (ICME)
Keywords
Energy-efficient wireless multimedia communication, Markov decision process, adaptive modulation and coding, dynamic power management, power-control, reinforcement learning
Field
Wireless, Markov process, Fading, Computer science, Power control, Markov decision process, Communication channel, Energy consumption, Multimedia, Reinforcement learning, Distributed computing
DocType
Conference
ISSN
1945-7871
ISBN
978-1-61284-349-0
Citations
0
PageRank
0.34
References
8
Authors
2
Name                     Order  Citations  PageRank
Nicholas Mastronarde     1      240        26.93
Mihaela Van Der Schaar   2      3968       352.59