Title
Generative Adversarial Networks for Multimodal Representation Learning in Video Hyperlinking
Abstract
Continuous multimodal representations suitable for multimodal information retrieval are usually obtained with methods that rely heavily on multimodal autoencoders. In video hyperlinking, a task that aims at retrieving video segments relevant to a given anchor segment, the state of the art is a variation of two interlocked networks working in opposite directions. These systems provide good multimodal embeddings and can also translate from one representation space to the other. However, because they operate on representation spaces, they cannot work in the original spaces (text or image), which makes the learned crossmodal function hard to visualize, and they do not generalize well to unseen data. Recently, generative adversarial networks (GANs) have gained popularity and have been used both to generate realistic synthetic data and to obtain high-level, single-modal latent representation spaces. In this work, we evaluate the feasibility of using GANs to obtain multimodal representations. We show that GANs can be used for multimodal representation learning and that the representations they provide are superior to those obtained with multimodal autoencoders. Additionally, we illustrate the ability to visualize crossmodal translations, which can provide human-interpretable insight into learned GAN-based video hyperlinking models.
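A minimal PyTorch sketch of the adversarial crossmodal setup the abstract describes: a generator translates text-side features into the image feature space, while a discriminator separates real (text, image) pairs from generated ones; the discriminator's hidden layer can then serve as a joint multimodal embedding. All names, feature dimensions, hyperparameters, and the random stand-in batches below are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

TXT_DIM, IMG_DIM, HID = 512, 1024, 256  # assumed feature sizes

# Generator: maps a text-side feature vector into the image feature space
# (the crossmodal translation whose visualization the abstract mentions).
generator = nn.Sequential(
    nn.Linear(TXT_DIM, HID), nn.ReLU(),
    nn.Linear(HID, IMG_DIM),
)

# Discriminator: scores whether a (text, image) feature pair is a real
# co-occurring pair or a (text, generated image) pair.
discriminator = nn.Sequential(
    nn.Linear(TXT_DIM + IMG_DIM, HID), nn.ReLU(),
    nn.Linear(HID, 1),
)

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(200):
    # Stand-in batch: in the paper's setting these would be precomputed
    # text and image descriptors of the same video segments.
    txt = torch.randn(32, TXT_DIM)
    img = torch.randn(32, IMG_DIM)

    # Discriminator step: real pairs -> 1, generated pairs -> 0.
    fake_img = generator(txt).detach()
    d_loss = bce(discriminator(torch.cat([txt, img], dim=1)), torch.ones(32, 1)) \
           + bce(discriminator(torch.cat([txt, fake_img], dim=1)), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make generated pairs look real.
    g_loss = bce(discriminator(torch.cat([txt, generator(txt)], dim=1)), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# A joint embedding of a pair can be read off the discriminator's hidden layer.
embedding = discriminator[0](torch.cat([txt, img], dim=1))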
Year
2017
DOI
10.1145/3078971.3079038
Venue
ICMR
Keywords
video hyperlinking, multimedia retrieval, multimodal embedding, multimodal autoencoders, representation learning, unsupervised learning, generative adversarial networks, neural networks
DocType
Conference
Volume
abs/1705.05103
Citations
3
PageRank
0.38
References
9
Authors
3
Name               Order  Citations  PageRank
Vedran Vukotic     1      29         4.59
Christian Raymond  2      118        13.80
Guillaume Gravier  3      1413       127.38