Title
A Distributional View on Multi-Objective Policy Optimization
Abstract
Many real-world problems require trading off multiple competing objectives. However, these objectives are often in different units and/or scales, which can make it challenging for practitioners to express numerical preferences over objectives in their native units. In this paper we propose a novel algorithm for multi-objective reinforcement learning that enables setting desired preferences for objectives in a scale-invariant way. We propose to learn an action distribution for each objective, and we use supervised learning to fit a parametric policy to a combination of these distributions. We demonstrate the effectiveness of our approach on challenging high-dimensional real and simulated robotics tasks, and show that setting different preferences in our framework allows us to trace out the space of nondominated solutions.
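The abstract sketches a two-step structure: learn an improved action distribution per objective, then fit the parametric policy to a combination of those distributions by supervised learning. The minimal NumPy sketch below illustrates that structure for a single state, under stated assumptions: the exponential reweighting with hand-set per-objective temperatures `etas`, the toy Q-functions, the sample count, and the simple averaging of the per-objective distributions are illustrative choices, not details taken from the paper.

```python
# Illustrative sketch (not the authors' implementation): per-objective action
# distributions via exponential reweighting of sampled actions, then a
# supervised (weighted maximum-likelihood) fit of a Gaussian policy to their
# combination, for a single state.
import numpy as np

rng = np.random.default_rng(0)
n_samples, action_dim = 64, 2
actions = rng.normal(size=(n_samples, action_dim))  # samples from the current policy

# Two toy objectives on very different scales, each scoring the sampled actions.
q_values = np.stack([
    -np.sum((actions - 1.0) ** 2, axis=1),  # objective 1: reach the point (1, 1)
    -100.0 * np.abs(actions[:, 0]),         # objective 2: similar idea, 100x the scale
])

# Hypothetical per-objective temperatures standing in for preferences: a larger
# temperature makes that objective's reweighting less greedy, so each
# objective's influence is set relative to its own scale rather than in
# absolute reward units.
etas = np.array([1.0, 100.0])

def softmax(x):
    x = x - x.max()  # subtract the max for numerical stability
    e = np.exp(x)
    return e / e.sum()

# One improved action distribution per objective: q_k(a) proportional to
# exp(Q_k(a) / eta_k), evaluated over the sampled actions.
per_obj_weights = np.stack([softmax(q / eta) for q, eta in zip(q_values, etas)])

# Combine the per-objective distributions (here simply averaged) and fit the
# parametric policy by weighted maximum likelihood; for a single Gaussian this
# fit is available in closed form.
w = per_obj_weights.mean(axis=0)
mean = (w[:, None] * actions).sum(axis=0)
centered = actions - mean
cov = (w[:, None, None] * np.einsum('ni,nj->nij', centered, centered)).sum(axis=0)

print("fitted policy mean:", mean)
print("fitted policy covariance diagonal:", np.diag(cov))
```

In a full algorithm this last step would be a gradient-based maximum-likelihood fit of a neural-network policy across a batch of states; the sketch only shows why expressing preferences through per-objective reweighting, rather than through a weighted sum of rewards, sidesteps the units-and-scales issue the abstract raises.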
Year: 2020
Venue: ICML
DocType: Conference
Citations: 0
PageRank: 0.34
References: 0
Authors: 10

Name | Order | Citations | PageRank
Abbas Abdolmaleki | 1 | 46 | 12.82
Sandy H. Huang | 2 | 67 | 4.65
Leonard Hasenclever | 3 | 20 | 5.42
M. Neunert | 4 | 65 | 9.95
H. Francis Song | 5 | 105 | 5.14
Martina Zambelli | 6 | 1 | 1.03
Murilo F. Martins | 7 | 1 | 0.69
Nicolas Heess | 8 | 1762 | 94.77
R. Hadsell | 9 | 1678 | 100.80
Martin Riedmiller | 10 | 5655 | 366.29