Abstract
---
The kd-tree [friedman1976algorithm] has long been deemed unsuitable for exact nearest-neighbor search in high-dimensional data. Neither the theoretical guarantees nor the empirical performance of the kd-tree show significant improvement over brute-force nearest-neighbor search in moderate to high dimensions. The kd-tree has been used relatively more successfully for approximate search [muja2009flann], but such schemes lack theoretical guarantees. In this article, we build upon randomized partition trees [dasgupta2013randomized] to propose kd-tree based approximate search schemes with $O(d \log d + \log n)$ query time for data sets with $n$ points in $d$ dimensions, together with rigorous theoretical guarantees on the search accuracy. We empirically validate the search accuracy and the query-time guarantees of our proposed schemes, demonstrating significantly improved scaling for the same level of accuracy.
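To make the flavor of such schemes concrete, here is a minimal illustrative sketch (not the authors' actual algorithm; `build_tree` and `defeatist_query` are hypothetical names) of a randomized kd-tree queried with defeatist, no-backtracking descent — the ingredient that keeps query time logarithmic in $n$ rather than falling back to brute force:

```python
import random

def build_tree(points, leaf_size=8, rng=random):
    """Recursively split points on a randomly chosen coordinate at its median."""
    if len(points) <= leaf_size:
        return {"points": points}          # leaf: store the points directly
    dim = rng.randrange(len(points[0]))    # random split coordinate
    points = sorted(points, key=lambda p: p[dim])
    mid = len(points) // 2
    return {
        "dim": dim,
        "threshold": points[mid][dim],     # median split value
        "left": build_tree(points[:mid], leaf_size, rng),
        "right": build_tree(points[mid:], leaf_size, rng),
    }

def defeatist_query(tree, q):
    """Descend to one leaf without backtracking ("defeatist" search),
    then scan only that leaf: O(depth + leaf_size) distance computations."""
    while "points" not in tree:
        tree = tree["left"] if q[tree["dim"]] < tree["threshold"] else tree["right"]
    return min(tree["points"],
               key=lambda p: sum((a - b) ** 2 for a, b in zip(p, q)))
```

The randomness in the split coordinate is what lets one prove accuracy guarantees over the choice of tree (or over several independent trees), rather than relying on the query being lucky with a fixed axis order.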
Year | DOI | Keywords |
---|---|---|
2019 | 10.1145/3292500.3330875 | nearest-neighbor search, randomized algorithms, similarity search, space-partitioning trees |
Field | DocType | ISSN |
---|---|---|
Randomized algorithm, Data mining, Binary logarithm, Data set, Clustering high-dimensional data, Computer science, k-d tree, Algorithm, Scaling, Nearest neighbor search | Conference | 978-1-4503-6201-6 |
ISBN | Citations | PageRank |
---|---|---|
978-1-4503-6201-6 | 2 | 0.45 |
References | Authors |
---|---|
0 | 2 |
Name | Order | Citations | PageRank |
---|---|---|---|
Parikshit Ram | 1 | 10 | 3.29 |
Kaushik Sinha | 2 | 244 | 17.81 |