Title
Learning to Follow Language Instructions with Adversarial Reward Induction
Abstract
Recent work has shown that deep reinforcement-learning agents can learn to follow language-like instructions from sparse environment rewards. However, for many real-world natural language commands that involve a degree of underspecification or ambiguity, such as "tidy the room", it would be challenging or impossible to program an appropriate reward function. To overcome this, we present a method for learning to follow commands from a training set of instructions and corresponding example goal-states, rather than an explicit reward function. Importantly, the example goal-states are not seen at test time. The approach effectively separates the representation of what instructions require from how they can be executed. In a simple grid world, the method enables an agent to learn a range of commands requiring interaction with blocks and understanding of spatial relations and underspecified abstract arrangements. We further show that the method allows the agent to adapt to changes in the environment without requiring new training examples.
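The core idea in the abstract, inducing a reward from (instruction, goal-state) examples via a discriminator that then scores the agent's states, can be sketched in a few lines. The following is a minimal illustrative sketch in PyTorch, not the authors' implementation: the toy dimensions, module names, and random stand-in data are all assumptions, and the reinforcement-learning loop that would train the policy against this induced reward is omitted.

```python
# Minimal sketch of adversarial reward induction (illustrative assumptions
# throughout; not the paper's actual architecture or hyperparameters).
# Instructions and states are represented here as fixed-size feature vectors.
import torch
import torch.nn as nn

INSTR_DIM, STATE_DIM, HIDDEN = 16, 32, 64  # toy sizes, chosen arbitrarily

class RewardDiscriminator(nn.Module):
    """D(instruction, state) -> probability the state satisfies the instruction."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(INSTR_DIM + STATE_DIM, HIDDEN),
            nn.ReLU(),
            nn.Linear(HIDDEN, 1),
        )

    def forward(self, instr, state):
        return torch.sigmoid(self.net(torch.cat([instr, state], dim=-1))).squeeze(-1)

disc = RewardDiscriminator()
opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCELoss()

def discriminator_step(instr, goal_states, agent_states):
    """One discriminator update: example goal-states are positives; states the
    current policy visits are negatives. As the policy improves, the negatives
    become harder, which is the adversarial component of the setup."""
    pos = disc(instr, goal_states)
    neg = disc(instr, agent_states)
    loss = bce(pos, torch.ones_like(pos)) + bce(neg, torch.zeros_like(neg))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

def induced_reward(instr, state):
    """Reward handed to the policy. The goal examples themselves are never
    needed at test time; only this learned reward function is."""
    with torch.no_grad():
        return (disc(instr, state) > 0.5).float()

# Random stand-ins for a batch of (instruction, goal-state) training pairs
# and for states gathered by the current policy.
instr = torch.randn(8, INSTR_DIM)
goal_states = torch.randn(8, STATE_DIM)
agent_states = torch.randn(8, STATE_DIM)
discriminator_step(instr, goal_states, agent_states)
print(induced_reward(instr, agent_states))
```

Thresholding the discriminator output into a 0/1 reward is one plausible design choice here: it keeps the induced reward as sparse as the environment rewards the abstract mentions, rather than handing the policy a dense, possibly misleading shaping signal.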
Year
2018
Venue
arXiv: Artificial Intelligence
Field
Computer science, Cognitive psychology, Adversarial system
DocType
Journal
Volume
abs/1806.01946
Citations
4
PageRank
0.39
References
21
Authors
6
Name                 Order  Citations  PageRank
Dzmitry Bahdanau     1      2677       117.03
Felix Hill           2      346        17.90
Jan Leike            3      150        15.49
Edward Hughes        4      26         7.67
Pushmeet Kohli       5      7398       332.84
Edward Grefenstette  6      1743       80.65