Title
Near-Optimal Model-Free Reinforcement Learning in Non-Stationary Episodic MDPs
Abstract
We consider model-free reinforcement learning (RL) in non-stationary Markov decision processes. Both the reward functions and the state transition functions are allowed to vary arbitrarily over time as long as their cumulative variations do not exceed certain variation budgets. We propose Restarted Q-Learning with Upper Confidence Bounds (RestartQ-UCB), the first model-free algorithm for non-stationary RL, and show that it outperforms existing solutions in terms of dynamic regret. Specifically, RestartQ-UCB with Freedman-type bonus terms achieves a dynamic regret bound of $\widetilde{O}(S^{1/3} A^{1/3} \Delta^{1/3} H T^{2/3})$, where $S$ and $A$ are the numbers of states and actions, respectively, $\Delta > 0$ is the variation budget, $H$ is the number of time steps per episode, and $T$ is the total number of time steps. We further show that our algorithm is nearly optimal by establishing an information-theoretic lower bound of $\Omega(S^{1/3} A^{1/3} \Delta^{1/3} H^{2/3} T^{2/3})$, the first lower bound in non-stationary RL. Numerical experiments validate the advantages of RestartQ-UCB in terms of both cumulative rewards and computational efficiency. We further demonstrate the power of our results in the context of multi-agent RL, where non-stationarity is a key challenge.
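The abstract's core algorithmic idea is to periodically restart optimistic Q-learning so that estimates gathered before the environment drifts cannot mislead the learner afterwards. Below is a minimal, hypothetical sketch of that idea for a tabular episodic MDP, assuming a simple Hoeffding-style bonus and a fixed restart interval rather than the paper's Freedman-type bonuses and tuned restart schedule; the function name `restartq_ucb_sketch`, the `env.reset()`/`env.step()` interface, and the constant `c_bonus` are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def restartq_ucb_sketch(env, S, A, H, num_episodes, restart_every, c_bonus=1.0):
    """Hypothetical sketch of restarted optimistic Q-learning in a tabular
    episodic MDP. `env.reset()` is assumed to return an initial state index
    and `env.step(h, s, a)` a pair (next_state, reward); this interface is
    an assumption, not part of the paper."""
    total_reward = 0.0
    for k in range(num_episodes):
        # Restart: discard all estimates so that stale data from a
        # previously different environment cannot bias the optimistic values.
        if k % restart_every == 0:
            Q = np.full((H, S, A), float(H))      # optimistic initialization
            N = np.zeros((H, S, A), dtype=int)    # visit counts per (h, s, a)
        s = env.reset()
        for h in range(H):
            a = int(np.argmax(Q[h, s]))           # act greedily w.r.t. optimistic Q
            s_next, r = env.step(h, s, a)
            total_reward += r
            N[h, s, a] += 1
            n = N[h, s, a]
            alpha = (H + 1) / (H + n)             # learning rate used in optimistic Q-learning
            bonus = c_bonus * np.sqrt(H**3 / n)   # simplified Hoeffding-style bonus (assumption)
            v_next = Q[h + 1, s_next].max() if h + 1 < H else 0.0
            Q[h, s, a] = (1 - alpha) * Q[h, s, a] + alpha * (r + v_next + bonus)
            Q[h, s, a] = min(Q[h, s, a], H)       # clip to the largest possible return
            s = s_next
    return total_reward
```

In the paper's analysis, the restart frequency and the bonus terms are chosen as functions of the variation budget $\Delta$, $H$, and $T$ to obtain the stated dynamic regret bound; the sketch above leaves `restart_every` as a free parameter.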
Year
2021
Venue
INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139
DocType
Conference
Volume
139
ISSN
2640-3498
Citations
0
PageRank
0.34
References
0
Authors
5
Name                 Order   Citations   PageRank
Weichao Mao          1       1           3.73
Kaiqing Zhang        2       48          13.02
Ruihao Zhu           3       0           0.68
David Simchi-Levi    4       1449        151.53
Tamer Basar          5       3497        402.11