Abstract
---
Semantic hashing has become a powerful paradigm for fast similarity search in many information retrieval systems. While fairly successful, previous techniques generally require two-stage training, and the binary constraints are handled in an ad hoc manner. In this paper, we present an end-to-end Neural Architecture for Semantic Hashing (NASH), where the binary hashing codes are treated as Bernoulli latent variables. A neural variational inference framework is proposed for training, where gradients are directly back-propagated through the discrete latent variables to optimize the hash function. We also draw connections between the proposed method and rate-distortion theory, which provides a theoretical foundation for the effectiveness of the proposed framework. Experimental results on three public datasets demonstrate that our method significantly outperforms several state-of-the-art models in both unsupervised and supervised scenarios.
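The core mechanism the abstract highlights, back-propagating gradients through discrete Bernoulli codes, is typically realized with a straight-through estimator. The sketch below illustrates that idea only; it is not the authors' implementation, and the class name `BernoulliHasher`, the layer sizes, and the MSE reconstruction loss are hypothetical stand-ins (the paper's actual training objective is a variational lower bound, which adds a KL regularizer on the code distribution).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BernoulliHasher(nn.Module):
    """Illustrative encoder-decoder with binary Bernoulli latent codes."""
    def __init__(self, input_dim, code_bits, hidden_dim=500):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, code_bits),
        )
        self.decoder = nn.Linear(code_bits, input_dim)

    def forward(self, x):
        probs = torch.sigmoid(self.encoder(x))    # per-bit Bernoulli means
        z_hard = torch.bernoulli(probs.detach())  # sample hard 0/1 hash codes
        # Straight-through estimator: the forward pass uses the discrete
        # sample, while the backward pass routes gradients through `probs`.
        z = z_hard + probs - probs.detach()
        return self.decoder(z), probs, z_hard

# Usage: hash 2000-dim document vectors into 32-bit binary codes.
model = BernoulliHasher(input_dim=2000, code_bits=32)
x = torch.rand(8, 2000)
recon, probs, codes = model(x)
loss = F.mse_loss(recon, x)  # the full variational objective adds a KL term
loss.backward()
```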
Year | Venue | Field
---|---|---
2018 | Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL), Vol. 1 | Inference, End-to-end principle, Computer science, Latent variable, Hash function, Artificial intelligence, Generative grammar, Machine learning, Nearest neighbor search, Bernoulli's principle, Binary number

DocType | Volume | Citations
---|---|---
Journal | abs/1805.05361 | 2

PageRank | References | Authors
---|---|---
0.36 | 31 | 7

Name | Order | Citations | PageRank |
---|---|---|---
Dinghan Shen | 1 | 108 | 10.37 |
Qinliang Su | 2 | 55 | 10.07 |
Paidamoyo Chapfuwa | 3 | 2 | 1.04 |
Wenlin Wang | 4 | 2 | 2.05 |
Guoyin Wang | 5 | 24 | 7.38 |
Lawrence Carin | 6 | 2 | 1.37 |
Ricardo Henao | 7 | 286 | 23.85 |