Abstract |
---|
Can we leverage learning techniques to build a fast nearest-neighbor (NN) retrieval data structure? We present a general learning framework for the NN problem in which sample queries are used to learn the parameters of a data structure that minimize the retrieval time and/or the miss rate. We explore the potential of this novel framework through two popular NN data structures: KD-trees and the rectilinear structures employed by locality sensitive hashing. We derive a generalization theory for these data structure classes and present simple learning algorithms for both. Experimental results reveal that learning often improves on the already strong performance of these data structures. |
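The abstract refers to KD-trees as one of the two NN data structures whose parameters are learned. As background, a minimal sketch of an exact KD-tree nearest-neighbor query (not the paper's learned variant; function names and the dictionary node layout are illustrative):

```python
def build(points, depth=0):
    """Build a KD-tree over a list of equal-length tuples."""
    if not points:
        return None
    axis = depth % len(points[0])         # cycle through coordinates
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2                # median point becomes the node
    return {
        "point": points[mid],
        "axis": axis,
        "left": build(points[:mid], depth + 1),
        "right": build(points[mid + 1:], depth + 1),
    }

def nearest(node, query, best=None):
    """Exact NN query: descend toward the query, backtrack when needed."""
    if node is None:
        return best

    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    if best is None or dist2(node["point"], query) < dist2(best, query):
        best = node["point"]
    diff = query[node["axis"]] - node["point"][node["axis"]]
    near, far = ((node["left"], node["right"]) if diff < 0
                 else (node["right"], node["left"]))
    best = nearest(near, query, best)
    # Visit the far child only if the splitting plane could still
    # hide a point closer than the current best.
    if diff ** 2 < dist2(best, query):
        best = nearest(far, query, best)
    return best
```

Skipping the backtracking step ("defeatist" search) trades exactness for speed, which is the kind of time/miss-rate trade-off the framework optimizes over.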
Year | Venue | Keywords
---|---|---
2007 | NIPS | locality sensitive hashing, data structure, nearest neighbor, kd tree, nearest neighbor search
Field | DocType | Citations
---|---|---
Generalizability theory, Locality-sensitive hashing, Data structure, Ball tree, Computer science, Artificial intelligence, Cover tree, Nearest neighbor search, Machine learning | Conference | 7
PageRank | References | Authors
---|---|---
0.51 | 10 | 2
Name | Order | Citations | PageRank
---|---|---|---
Lawrence Cayton | 1 | 152 | 8.27
Sanjoy Dasgupta | 2 | 2052 | 172.00