Title
Crossmodal attentive skill learner: learning in Atari and beyond with audio-video inputs.
Abstract
This paper introduces the Crossmodal Attentive Skill Learner (CASL), integrated with the recently-introduced Asynchronous Advantage Option-Critic architecture [Harb et al. in When waiting is not an option: learning options with a deliberation cost. arXiv preprint arXiv:1709.04571, 2017] to enable hierarchical reinforcement learning across multiple sensory inputs. Agents trained using our approach learn to attend to their various sensory modalities (e.g., audio, video) at the appropriate moments, thereby executing actions based on multiple sensory streams without reliance on supervisory data. We demonstrate empirically that the sensory attention mechanism anticipates and identifies useful latent features, while filtering irrelevant sensor modalities during execution. Further, we provide concrete examples in which the approach not only improves performance in a single task, but accelerates transfer to new tasks. We modify the Arcade Learning Environment [Bellemare et al. in J Artif Intell Res 47:253–279, 2013] to support audio queries (ALE-audio code available at https://github.com/shayegano/Arcade-Learning-Environment), and conduct evaluations of crossmodal learning in the Atari 2600 games H.E.R.O. and Amidar. Finally, building on the recent work of Babaeizadeh et al. [in: International conference on learning representations (ICLR), 2017], we open-source a fast hybrid CPU–GPU implementation of CASL (CASL code available at https://github.com/shayegano/CASL).
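The crossmodal attention described in the abstract can be sketched as a soft weighting over per-modality feature vectors: each modality (e.g., audio, video) is scored, the scores are normalized with a softmax, and the fused feature is the resulting convex combination. This is a minimal illustration only, not the paper's actual network; the linear scoring vector and all names below are assumptions.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def crossmodal_attention(modality_feats, score_w):
    """Attend over per-modality feature vectors (illustrative sketch).

    modality_feats: (M, D) array, one D-dim feature vector per modality
    score_w:        (D,) scoring vector (stand-in for a learned layer)
    Returns (weights, fused): attention weights (M,) and fused feature (D,).
    """
    scores = modality_feats @ score_w   # one scalar relevance score per modality
    weights = softmax(scores)           # normalize across modalities
    fused = weights @ modality_feats    # attention-weighted combination
    return weights, fused

# Example: two modalities (audio, video) with 4-dim features.
feats = np.array([[1.0, 0.0, 0.0, 0.0],   # "audio" feature (hypothetical)
                  [0.0, 2.0, 0.0, 0.0]])  # "video" feature (hypothetical)
w = np.array([0.1, 1.0, 0.0, 0.0])
weights, fused = crossmodal_attention(feats, w)
```

Here the second modality receives the larger weight because its feature aligns better with the scoring vector, mirroring how the learner can down-weight an uninformative sensor at a given moment.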
Year
2020
DOI
10.1007/s10458-019-09439-5
Venue
Autonomous Agents and Multi-Agent Systems
Keywords
Hierarchical learning, Reinforcement learning, Multimodal learning
DocType
Journal
Volume
34
Issue
1
ISSN
1387-2532
Citations
0
PageRank
0.34
References
25
Authors
4
Name                  Order  Citations  PageRank
Dong Ki Kim           1      18         5.65
Shayegan Omidshafiei  2      60         10.34
Jason Pazis           3      104        6.97
Jonathan How          4      1759       185.09