Title
Dynamic Weights in Multi-Objective Deep Reinforcement Learning
Abstract
Many real-world decision problems are characterized by multiple objectives which must be balanced based on their relative importance. In the dynamic weights setting this relative importance changes over time, as recognized by Natarajan and Tadepalli (2005), who proposed a tabular reinforcement learning algorithm to deal with this problem. However, this earlier work is not feasible for reinforcement learning settings in which the input is high-dimensional, necessitating the use of function approximators such as neural networks. We propose two novel methods for multi-objective RL with dynamic weights: a multi-network approach and a single-network approach that conditions on the weights. Due to the inherent non-stationarity of the dynamic weights setting, standard experience replay techniques are insufficient. We therefore propose diverse experience replay, a framework to maintain a diverse set of experiences in the replay buffer, and show how it can be applied to make experience replay relevant in multi-objective RL. To evaluate the performance of our algorithms we introduce a new benchmark called the Minecart problem. We show empirically that our algorithms outperform more naive approaches. We also show that, while performance differs significantly between settings with many small weight changes and settings with sparse but larger changes, the conditioned network with diverse experience replay consistently outperforms the other algorithms.
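For illustration, the sketch below shows one plausible form of the single-network approach mentioned in the abstract: a Q-network that takes the current objective-weight vector as an additional input, so that one set of parameters can serve changing weights. This is a hedged sketch, not the authors' implementation; the class name, layer sizes, and the linear-scalarization action selection are assumptions made here for the example.

```python
# Illustrative sketch (assumed design, not the paper's code): a multi-objective
# Q-network conditioned on the objective weights, written with PyTorch.
import torch
import torch.nn as nn


class ConditionedQNetwork(nn.Module):
    """Predicts per-action, per-objective Q-values given a state and a weight vector."""

    def __init__(self, state_dim, n_actions, n_objectives, hidden=128):
        super().__init__()
        # The current objective weights are concatenated to the state input,
        # so a single network can adapt as the weights change over time.
        self.net = nn.Sequential(
            nn.Linear(state_dim + n_objectives, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_actions * n_objectives),
        )
        self.n_actions = n_actions
        self.n_objectives = n_objectives

    def forward(self, state, weights):
        # state: (batch, state_dim), weights: (batch, n_objectives)
        x = torch.cat([state, weights], dim=-1)
        q = self.net(x).view(-1, self.n_actions, self.n_objectives)
        return q

    def greedy_action(self, state, weights):
        # Linearly scalarize the multi-objective Q-values with the current
        # weights and pick the action with the highest scalarized value.
        q = self.forward(state, weights)                 # (batch, actions, objectives)
        scalarized = (q * weights.unsqueeze(1)).sum(-1)  # (batch, actions)
        return scalarized.argmax(dim=-1)
```

A training loop would sample (state, action, multi-objective reward, next state) tuples from the replay buffer together with a weight vector; the paper's diverse experience replay additionally curates which experiences are kept so the buffer stays informative as the weights drift.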
Year
2018
Venue
arXiv: Learning
Field
Computer science, Artificial intelligence, Machine learning, Reinforcement learning
DocType
Journal
Volume
abs/1809.07803
Citations
0
PageRank
0.34
References
0
Authors
5
Name                  Order  Citations  PageRank
Axel Abels            1      0          0.34
Diederik M. Roijers   2      198        24.72
Tom Lenaerts          3      276        53.44
Ann Nowé              4      971        123.04
Denis Steckelmacher   5      0          2.03