Title
Jointly Discovering Visual Objects and Spoken Words from Raw Sensory Input
Abstract
In this paper, we explore neural network models that learn to associate segments of spoken audio captions with the semantically relevant portions of natural images that they refer to. We demonstrate that these audio-visual associative localizations emerge from network-internal representations learned as a by-product of training to perform an image-audio retrieval task. Our models operate directly on the image pixels and speech waveform, and do not rely on any conventional supervision in the form of labels, segmentations, or alignments between the modalities during training. We perform analysis using the Places 205 and ADE20k datasets, demonstrating that our models implicitly learn semantically coupled object and word detectors.
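The abstract describes training on a cross-modal image-audio retrieval objective without labels, segmentations, or alignments. As a rough illustration only (not the authors' released implementation), the sketch below shows one common way such a model can be trained: an image branch produces a spatial feature map, a speech branch produces a temporal feature sequence, the two are compared through a "matchmap" of local similarities, and a margin ranking loss pushes matched image-caption pairs above within-batch impostors. The tensor shapes, pooling choice, and impostor sampling here are assumptions for the sketch.

```python
# Illustrative sketch of matchmap-style cross-modal retrieval training.
# All shapes, names, and the impostor-sampling scheme are assumptions, not the paper's exact setup.
import torch
import torch.nn.functional as F


def matchmap_score(image_feats: torch.Tensor, audio_feats: torch.Tensor) -> torch.Tensor:
    """image_feats: (D, H, W) spatial embedding map; audio_feats: (D, T) temporal embedding sequence.
    Returns a scalar similarity by average-pooling the (H, W, T) matchmap of local dot products."""
    matchmap = torch.einsum("dhw,dt->hwt", image_feats, audio_feats)
    return matchmap.mean()


def ranking_loss(image_batch: torch.Tensor, audio_batch: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    """Margin ranking loss over paired (image, spoken caption) embeddings,
    using the next batch element as a simple impostor for each pair."""
    batch_size = image_batch.shape[0]
    loss = image_batch.new_zeros(())
    for i in range(batch_size):
        j = (i + 1) % batch_size  # impostor index; random sampling is also common
        anchor = matchmap_score(image_batch[i], audio_batch[i])
        imp_image = matchmap_score(image_batch[j], audio_batch[i])
        imp_audio = matchmap_score(image_batch[i], audio_batch[j])
        loss = loss + F.relu(margin - anchor + imp_image) + F.relu(margin - anchor + imp_audio)
    return loss / batch_size


if __name__ == "__main__":
    # Random stand-ins for CNN outputs: 4 image maps (512 x 14 x 14) and 4 caption sequences (512 x 128 frames)
    images = torch.randn(4, 512, 14, 14)
    audios = torch.randn(4, 512, 128)
    print(ranking_loss(images, audios).item())
```

Under this kind of objective, the localization behavior the abstract refers to can be read off the matchmap itself: high-scoring (spatial, temporal) cells tend to align image regions with the speech segments that describe them.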
Year
2020
DOI
10.1007/s11263-019-01205-0
Venue
International Journal of Computer Vision
Keywords
Vision and language, Sound, Speech, Multimodal learning, Language acquisition, Visual object discovery, Unsupervised learning, Self-supervised learning
DocType
Journal
Volume
128
Issue
3
ISSN
0920-5691
Citations
2
PageRank
0.39
References
0
Authors
6
Name               Order  Citations  PageRank
David F. Harwath   1      63         8.34
Adrià Recasens     2      74         6.55
Dídac Surís        3      2          0.39
Galen Chuang       4      2          0.72
Antonio Torralba   5      146079     56.27
James Glass        6      31234      13.63