Title
Optimal Myopic Policy for Restless Bandit: A Perspective of Eigendecomposition
Abstract
This paper considers the restless multiarmed bandit (RMAB) problem involving partially observed Markov decision processes with heterogeneous state transition matrices in discrete time slots. Given the notorious PSPACE-hardness of the generic RMAB, we turn to the myopic policy, i.e., scheduling the best processes at each time, since the myopic policy has linear computational complexity and is widely adopted in practice. Intuitively, the myopic policy is not optimal because it only exploits information without exploring. In this paper, to establish the optimality of the myopic policy, we first generalize the concept of first-order stochastic dominance ordering and then decompose each state transition matrix into a weighted sum of a series of eigenmatrices, in which the weights are determined by the corresponding eigenvalues. Furthermore, we identify two sets of sufficient conditions on the eigenvalues and eigenmatrices that guarantee the optimality of the myopic policy for two instances, respectively. The theoretical results provide insights into the structure of the optimal policy for the generic RMAB.
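The eigendecomposition the abstract refers to can be illustrated with a small sketch (an assumption-level illustration, not the paper's exact construction): a diagonalizable transition matrix P decomposes as P = Σ_i λ_i M_i, where each eigenmatrix M_i = v_i u_iᵀ is the outer product of a right eigenvector v_i and the matching left eigenvector u_i, normalized so u_iᵀ v_i = 1. The matrix P and variable names below are hypothetical examples.

```python
import numpy as np

# A 2-state transition matrix (rows sum to 1); chosen for illustration only.
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])

eigvals, V = np.linalg.eig(P)   # columns of V are right eigenvectors
U = np.linalg.inv(V)            # rows of V^{-1} are the matching left eigenvectors

# Eigenmatrices: rank-one products M_i = v_i u_i^T
eigenmatrices = [np.outer(V[:, i], U[i, :]) for i in range(len(eigvals))]

# P is the eigenvalue-weighted sum of the eigenmatrices
P_reconstructed = sum(lam * M for lam, M in zip(eigvals, eigenmatrices))
assert np.allclose(P, P_reconstructed.real)

# A payoff of this decomposition: the k-step transition matrix is obtained
# by raising only the weights (eigenvalues) to the k-th power.
P3 = sum(lam**3 * M for lam, M in zip(eigvals, eigenmatrices))
assert np.allclose(np.linalg.matrix_power(P, 3), P3.real)
```

This makes concrete why conditions on eigenvalues and eigenmatrices are natural: the eigenvalues alone govern how the belief evolves across time steps, while the eigenmatrices are fixed structural components of the chain.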
Year: 2022
DOI: 10.1109/JSTSP.2022.3142502
Venue: IEEE Journal of Selected Topics in Signal Processing
Keywords: Restless bandit, optimality, myopic policy, eigen decomposition, stochastic order
DocType: Journal
Volume: 16
Issue: 3
ISSN: 1932-4553
Citations: 0
PageRank: 0.34
References: 24
Authors: 5
Name                Order  Citations  PageRank
K Wang              1      0          0.34
J Yu                2      0          0.34
C. L. Philip Chen   3      4022       244.76
P Zhou              4      0          0.34
Moe Z. Win          5      2225       196.12