Title
Multimodal Skipgram Using Convolutional Pseudowords
Abstract
This work studies representational mappings across multimodal data such that, given raw data in one modality, the corresponding semantic description in terms of raw data in another modality is immediately obtained. Such mappings arise in a wide spectrum of real-world applications, including image/video retrieval, object recognition, action/behavior recognition, and event understanding and prediction. To that end, we introduce a simplified training objective for learning multimodal embeddings with the skip-gram architecture, built on convolutional "pseudowords": embeddings formed as the additive combination of distributed word representations and image features from convolutional neural networks projected into the multimodal space. We present extensive results on the representational properties of these embeddings across various word similarity benchmarks to show the promise of this approach.
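The abstract describes a pseudoword as the additive combination of a word embedding and a CNN image feature projected into the multimodal space, trained with a skip-gram objective. The following minimal NumPy sketch illustrates that composition only; the dimensions, variable names, and the negative-sampling form of the loss are assumptions for illustration and are not the authors' implementation.

# A minimal sketch (not the authors' released code): a convolutional "pseudoword"
# is the additive combination of a word embedding and a CNN image feature
# projected into the multimodal space, scored against context words with a
# skip-gram style negative-sampling loss. All sizes and names are assumptions.

import numpy as np

rng = np.random.default_rng(0)

EMBED_DIM = 300   # assumed word/multimodal embedding size
CNN_DIM = 4096    # assumed CNN feature size (e.g. a penultimate fully connected layer)
VOCAB = 10_000    # assumed vocabulary size

# Input (word) vectors, output (context) vectors, and the image-to-multimodal projection.
word_vecs = rng.normal(scale=0.1, size=(VOCAB, EMBED_DIM))
ctx_vecs = rng.normal(scale=0.1, size=(VOCAB, EMBED_DIM))
proj = rng.normal(scale=0.01, size=(CNN_DIM, EMBED_DIM))


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def pseudoword(word_id, cnn_feature):
    """Additive combination: word embedding + projected image feature."""
    return word_vecs[word_id] + cnn_feature @ proj


def skipgram_loss(center, context_id, negative_ids):
    """Negative-sampling skip-gram loss for one (pseudoword, context word) pair."""
    pos = np.log(sigmoid(center @ ctx_vecs[context_id]))
    neg = np.log(sigmoid(-(ctx_vecs[negative_ids] @ center))).sum()
    return -(pos + neg)


# Toy usage: one image feature paired with a caption word and one context word.
cnn_feature = rng.normal(size=CNN_DIM)
center = pseudoword(word_id=42, cnn_feature=cnn_feature)
loss = skipgram_loss(center, context_id=7,
                     negative_ids=rng.integers(0, VOCAB, size=5))
print(f"toy loss: {loss:.3f}")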
Year
2015
Venue
CoRR
Field
Architecture, Video retrieval, Feature (computer vision), Computer science, Convolutional neural network, Raw data, Speech recognition, Artificial intelligence, Behavior recognition, Natural language processing, Machine learning, Cognitive neuroscience of visual object recognition
DocType
Journal
Volume
abs/1511.04024
Citations
0
PageRank
0.34
References
12
Authors
3
Name | Order | Citations | PageRank
Zachary Seymour | 1 | 0 | 0.34
Yingming Li | 2 | 0 | 0.34
Zhongfei (Mark) Zhang | 3 | 2451 | 164.30