Title
DARLA: Improving Zero-Shot Transfer in Reinforcement Learning.
Abstract
Domain adaptation is an important open problem in deep reinforcement learning (RL). In many scenarios of interest, data is hard to obtain, so agents may learn a source policy in a setting where data is readily available, with the hope that it generalises well to the target domain. We propose a new multi-stage RL agent, DARLA (DisentAngled Representation Learning Agent), which learns to see before learning to act. DARLA's vision is based on learning a disentangled representation of the observed environment. Once DARLA can see, it is able to acquire source policies that are robust to many domain shifts - even with no access to the target domain. DARLA significantly outperforms conventional baselines in zero-shot domain adaptation scenarios, an effect that holds across a variety of RL environments (Jaco arm, DeepMind Lab) and base RL algorithms (DQN, A3C and EC).
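The abstract describes a two-stage "learn to see, then learn to act" pipeline: an unsupervised vision module learns a disentangled representation of source-domain observations, and the RL policy is then trained on that frozen representation so it transfers zero-shot to the target domain. The following is a minimal sketch of that pipeline, assuming a beta-VAE-style vision module (as used in the DARLA paper) and a simple policy head in PyTorch; the class names, network sizes, and hyperparameters here are illustrative placeholders, not the authors' implementation.

```python
# Hypothetical sketch of DARLA's multi-stage training; not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BetaVAE(nn.Module):
    """Stage 1: unsupervised vision model trained on source-domain frames."""

    def __init__(self, obs_dim: int = 84 * 84, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, obs_dim))

    def encode(self, x):
        h = self.encoder(x)
        return self.mu(h), self.logvar(h)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterise
        return self.decoder(z), mu, logvar


def beta_vae_loss(recon, x, mu, logvar, beta: float = 4.0):
    # beta > 1 pressures the posterior towards a factorised prior,
    # encouraging a disentangled latent representation.
    recon_loss = F.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + beta * kl


class Policy(nn.Module):
    """Stage 2: policy head trained on the frozen latent representation."""

    def __init__(self, latent_dim: int = 32, n_actions: int = 4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                 nn.Linear(128, n_actions))

    def forward(self, z):
        return self.net(z)


if __name__ == "__main__":
    vision = BetaVAE()
    policy = Policy()

    # Stage 1: train the vision module on source-domain observations.
    frames = torch.rand(16, 84 * 84)
    recon, mu, logvar = vision(frames)
    beta_vae_loss(recon, frames, mu, logvar).backward()

    # Stage 2: freeze the encoder and feed its latents to any base RL
    # algorithm (DQN, A3C or EC in the paper); here just a forward pass.
    with torch.no_grad():
        z, _ = vision.encode(frames)
    q_values = policy(z)

    # Stage 3 (zero-shot transfer): the same frozen encoder and policy are
    # deployed on target-domain observations with no further training.
    print(q_values.shape)
```

The key design choice the abstract highlights is that the representation is learned once on the source domain and never fine-tuned, so robustness to domain shift comes entirely from the disentangled latent space rather than from target-domain data.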
Year: 2017
Venue: ICML
DocType:
Journal:
Volume: abs/1707.08475
Citations: 19
PageRank: 0.71
References: 29
Authors: 9
Name | Order | Citations | PageRank
Irina Higgins | 1 | 245 | 11.95
Arka Pal | 2 | 221 | 7.85
Andrei A. Rusu | 3 | 19 | 0.71
Loïc Matthey | 4 | 239 | 10.16
Christopher Burgess | 5 | 236 | 9.62
Alexander Pritzel | 6 | 521 | 20.08
Matthew M Botvinick | 7 | 494 | 25.34
Charles Blundell | 8 | 822 | 41.64
Alexander Lerchner | 9 | 256 | 11.70