Title
Frustratingly Easy Cross-Modal Hashing
Abstract
Cross-modal hashing has attracted considerable attention due to its low storage cost and fast retrieval speed. Recently, increasingly sophisticated methods have been proposed for this task. However, they tend to be computationally inefficient for several reasons. On one hand, learning coupled hash projections makes the iterative optimization problem challenging. On the other hand, learning collective binary codes for each individual item also incurs high computational complexity. In this paper we describe a simple yet effective cross-modal hashing approach that can be implemented in just three lines of code. The approach first obtains binary codes for one modality via a unimodal hashing method (e.g., iterative quantization (ITQ)), then applies simple linear regression to project the other modalities into the obtained binary subspace. It is non-iterative and parameter-free, which makes it attractive for many real-world applications. We further compare our approach with other state-of-the-art methods on four benchmark datasets (i.e., the Wiki, VOC, LabelMe and NUS-WIDE datasets). Despite its extraordinary simplicity, our approach performs remarkably and consistently well on these datasets under different experimental settings (i.e., large-scale, high-dimensional and multi-label datasets).
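The three-step recipe described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the toy data, dimensions, and the random-hyperplane quantization standing in for ITQ are all assumptions made here for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy paired data (an assumption): 100 samples with 64-d image features
# and 32-d text features describing the same items.
X_img = rng.standard_normal((100, 64))
X_txt = rng.standard_normal((100, 32))

# Line 1: binary codes for one modality. The paper uses a unimodal hashing
# method such as ITQ; sign-of-random-projection is a stand-in here.
B = np.sign(X_img @ rng.standard_normal((64, 16)))   # codes in {-1, +1}

# Line 2: least-squares linear regression projecting the other modality
# into the learned binary subspace (solves X_txt @ W ~= B).
W = np.linalg.lstsq(X_txt, B, rcond=None)[0]

# Line 3: hash the other modality into the shared binary space.
codes_txt = np.sign(X_txt @ W)
```

Because each step is a closed-form operation (quantization, least squares, thresholding), the whole procedure is non-iterative and parameter-free, as the abstract claims.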
Year
2016
DOI
10.1145/2964284.2967218
Venue
ACM Multimedia
Keywords
cross-modal hashing, cross-media retrieval, image and text, double alignment
Field
LabelMe, Computer science, Universal hashing, Binary code, Feature hashing, Theoretical computer science, Hash function, Artificial intelligence, Optimization problem, Dynamic perfect hashing, Machine learning, Source lines of code
DocType
Conference
Citations
4
PageRank
0.38
References
23
Authors
4

Name            Order  Citations  PageRank
Dekui Ma        1      4          0.72
Jian Liang      2      84         14.98
Xiang-Wei Kong  3      212        15.09
Ran He          4      1790       108.39