Abstract |
---|
Similarity search, or finding approximate nearest neighbors, is an important technique for many applications. Much recent research demonstrates that hashing methods can achieve promising results for large-scale similarity search due to their computational and memory efficiency. However, most existing hashing methods treat all hashing bits equally and calculate the distance between data examples as the Hamming distance between their hashing codes, even though different hashing bits may carry different amounts of information. This paper proposes a novel method, named Weighted Hashing (WeiHash), to assign different weights to different hashing bits. The hashing codes and their corresponding weights are jointly learned in a unified framework by simultaneously preserving the similarity between data examples and balancing the variance of each hashing bit. An iterative coordinate descent optimization algorithm is designed to derive the desired hashing codes and weights. Extensive experiments on two large-scale datasets demonstrate the superior performance of the proposed method over several state-of-the-art hashing methods. |
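The core idea of the abstract, replacing the plain Hamming distance with a per-bit weighted one, can be sketched as follows. This is a minimal illustration, not the paper's learning algorithm: the weight values here are hypothetical, whereas WeiHash learns them jointly with the codes.

```python
import numpy as np

def weighted_hamming(a, b, w):
    """Sum the per-bit weights over positions where two hash codes disagree."""
    a, b, w = np.asarray(a), np.asarray(b), np.asarray(w)
    return float(np.sum(w * (a != b)))

code_x = [1, 0, 1, 1]
code_y = [1, 1, 0, 1]

uniform = [1.0, 1.0, 1.0, 1.0]   # all bits equal: reduces to plain Hamming distance
learned = [0.5, 2.0, 1.0, 0.25]  # hypothetical per-bit weights (illustrative only)

print(weighted_hamming(code_x, code_y, uniform))  # 2.0
print(weighted_hamming(code_x, code_y, learned))  # 3.0
```

With uniform weights the distance is the usual Hamming count of differing bits; with non-uniform weights, disagreements on more informative bits contribute more to the distance.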
Year | DOI | Venue |
---|---|---|
2013 | 10.1145/2505515.2507851 | CIKM |
Keywords | Field | DocType |
---|---|---|
weighted hashing,large scale datasets,large scale similarity search,hamming distance,different amount,proposed research,data example,fast large scale similarity,different weight,recent research,similarity search,hashing | Locality-sensitive hashing,Data mining,Pattern recognition,Computer science,Universal hashing,Tabulation hashing,K-independent hashing,Artificial intelligence,Consistent hashing,Dynamic perfect hashing,Hash table,Linear hashing | Conference |
Citations | PageRank | References |
---|---|---|
5 | 0.43 | 9 |
Authors |
---|
3 |
Name | Order | Citations | PageRank |
---|---|---|---|
Qifan Wang | 1 | 209 | 17.19 |
Dan Zhang | 2 | 461 | 22.17 |
Luo Si | 3 | 2498 | 169.52 |