Title
Grounded Language Learning Fast and Slow
Abstract
Recent work has shown that large text-based neural language models acquire a surprising propensity for one-shot learning. Here, we show that an agent situated in a simulated 3D world, and endowed with a novel dual-coding external memory, can exhibit similar one-shot word learning when trained with conventional RL algorithms. After a single introduction to a novel object via visual perception and language ("This is a dax"), the agent can manipulate the object as instructed ("Put the dax on the bed"), combining short-term, within-episode knowledge of the nonsense word with long-term lexical and motor knowledge. We find that, under certain training conditions and with a particular memory writing mechanism, the agent's one-shot word-object binding generalizes to novel exemplars within the same ShapeNet category, and is effective in settings with unfamiliar numbers of objects. We further show how dual-coding memory can be exploited as a signal for intrinsic motivation, stimulating the agent to seek names for objects that may be useful later. Together, the results demonstrate that deep neural networks can exploit meta-learning, episodic memory and an explicitly multi-modal environment to account for 'fast-mapping', a fundamental pillar of human cognitive development and a potentially transformative capacity for artificial agents.
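The mechanism at the heart of the abstract is a dual-coding external memory: each slot stores an object's visual code together with the language code it was paired with, so a query in one modality can retrieve the stored code from the other. Below is a minimal sketch of that idea, assuming a slot-based memory with soft cosine-similarity attention; the class name DualCodingMemory, the embedding dimension, and the read/write interface are illustrative assumptions, not the paper's implementation.

```python
import numpy as np


class DualCodingMemory:
    """Slot-based episodic memory holding paired visual/language codes.

    Each write stores one (visual, language) embedding pair, e.g. when
    the agent sees an object while hearing "This is a dax". Reads query
    with one modality and return a soft-attention average of the paired
    codes from the other modality, recovering the one-shot binding.
    """

    def __init__(self):
        self.visual_codes = []    # one visual embedding per slot
        self.language_codes = []  # the paired language embedding

    def write(self, visual_code, language_code):
        self.visual_codes.append(np.asarray(visual_code, dtype=float))
        self.language_codes.append(np.asarray(language_code, dtype=float))

    def _attend(self, query, keys, values):
        # Cosine-similarity attention over stored keys; assumes at
        # least one slot has been written.
        keys, values = np.stack(keys), np.stack(values)
        sims = keys @ query / (
            np.linalg.norm(keys, axis=1) * np.linalg.norm(query) + 1e-8)
        weights = np.exp(sims - sims.max())
        weights /= weights.sum()  # softmax over slots
        return weights @ values

    def read_with_language(self, word_embedding):
        # "Put the dax on the bed": retrieve what the dax looked like.
        return self._attend(word_embedding, self.language_codes,
                            self.visual_codes)

    def read_with_vision(self, visual_embedding):
        # Seeing an object again: retrieve the name it was given.
        return self._attend(visual_embedding, self.visual_codes,
                            self.language_codes)


# Illustrative usage: one-shot binding within an episode.
rng = np.random.default_rng(0)
memory = DualCodingMemory()
dax_visual, dax_word = rng.normal(size=8), rng.normal(size=8)
memory.write(dax_visual, dax_word)              # "This is a dax"
recalled = memory.read_with_language(dax_word)  # recall its appearance
```

One could also imagine deriving the intrinsic-motivation signal the abstract mentions from such a structure, e.g. rewarding the agent when a seen object has no confidently retrievable language code; that reading is an assumption here, not a detail given in the abstract.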
Year
2021
Venue
ICLR
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
6
Name | Order | Citations | PageRank
Felix Hill | 1 | 346 | 17.90
Olivier Tieleman | 2 | 1 | 2.72
Tamara von Glehn | 3 | 0 | 0.68
Nathaniel Wong | 4 | 0 | 0.68
Hamza Merzic | 5 | 0 | 0.68
Stephen Clark | 6 | 2369 | 162.42