Title
Learning From Multiview Correlations In Open-Domain Videos
Abstract
An increasing number of datasets contain multiple views, such as video, sound and automatic captions. A basic challenge in representation learning is how to leverage multiple views to learn better representations. This is further complicated by the existence of a latent alignment between views, such as between speech and its transcription, and by the multitude of choices for the learning objective. We explore an advanced, correlation-based representation learning method on a 4-way parallel, multimodal dataset, and assess the quality of the learned representations on retrieval-based tasks. We show that the proposed approach produces rich representations that capture most of the information shared across views. Our best models for speech and textual modalities achieve retrieval rates from 70.7% to 96.9% on open-domain, user-generated instructional videos. This shows it is possible to learn reliable representations across disparate, unaligned and noisy modalities, and encourages using the proposed approach on larger datasets.
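The keywords list canonical correlation analysis (CCA) as the basis of the correlation-based objective described in the abstract. As an illustrative sketch only, and not the authors' multiway or deep model, the following NumPy snippet implements plain two-view linear CCA: whiten each view, then take an SVD of the whitened cross-covariance to obtain maximally correlated projections. All variable names and the synthetic two-view data are assumptions for the example.

```python
import numpy as np

def cca(X, Y, k=1, reg=1e-6):
    """Linear CCA: projections of X and Y whose images are maximally correlated.

    Returns projection matrices A (for X), B (for Y) and the top-k
    canonical correlations.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    # Regularized covariance and cross-covariance estimates
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    # Whiten each view via Cholesky factors; the SVD of the whitened
    # cross-covariance then yields the canonical directions.
    Lx = np.linalg.cholesky(Cxx)
    Ly = np.linalg.cholesky(Cyy)
    T = np.linalg.solve(Lx, Cxy) @ np.linalg.inv(Ly).T
    U, s, Vt = np.linalg.svd(T)
    A = np.linalg.inv(Lx).T @ U[:, :k]
    B = np.linalg.inv(Ly).T @ Vt.T[:, :k]
    return A, B, s[:k]

# Two synthetic "views" driven by a shared latent variable (hypothetical data)
rng = np.random.default_rng(0)
z = rng.normal(size=(500, 1))
X = z @ rng.normal(size=(1, 3)) + 0.1 * rng.normal(size=(500, 3))
Y = z @ rng.normal(size=(1, 4)) + 0.1 * rng.normal(size=(500, 4))
A, B, corrs = cca(X, Y, k=1)
# The shared latent variable makes the top canonical correlation close to 1
print(float(corrs[0]))
```

In a retrieval setting like the one evaluated in the paper, items from one modality would be projected with `A` and queried against items from the other modality projected with `B`, ranking by similarity in the shared space.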
Year
2018
DOI
10.1109/icassp.2019.8683540
Venue
2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP)
Keywords
Multiview Learning, Representation Learning, Canonical Correlation Analysis
Field
Modalities, Multitude, Correlation, Artificial intelligence, Machine learning, Mathematics, Feature learning
DocType
Journal
Volume
abs/1811.08890
ISSN
1520-6149
Citations
1
PageRank
0.34
References
17
Authors
5
Name                 Order  Citations  PageRank
Nils Holzenberger    1      1          2.03
Shruti Palaskar      2      6          4.17
Pranava Madhyastha   3      2          1.37
Florian Metze        4      1069       106.49
R. Arora             5      489        35.97