Abstract |
---|
We consider the problem of learning the ranking function that maximizes a generalization of the Wilcoxon-Mann-Whitney statistic on the training data. Relying on an $\epsilon$-accurate approximation for the error function, we reduce the computational complexity of each iteration of a conjugate gradient algorithm for learning ranking functions from $\mathcal{O}(m^2)$ to $\mathcal{O}(m)$, where $m$ is the number of training samples. Experiments on public benchmarks for ordinal regression and collaborative filtering indicate that the proposed algorithm is as accurate as the best available methods in terms of ranking accuracy when the algorithms are trained on the same data. However, since it is several orders of magnitude faster than the current state-of-the-art approaches, it can leverage much larger training datasets. |
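To illustrate the kind of speedup the abstract describes, replacing a quadratic pairwise computation with a near-linear one, the Wilcoxon-Mann-Whitney statistic itself can be evaluated exactly via a rank-sum instead of explicit pairwise comparisons. This is a sketch of that analogous idea only; the paper's actual contribution is an $\epsilon$-accurate error-function approximation used inside the conjugate gradient iterations, which is not reproduced here.

```python
import numpy as np

def wmw_naive(pos_scores, neg_scores):
    """WMW statistic (AUC) by explicit pairwise comparison: O(m_pos * m_neg)."""
    return np.mean([float(p > n) for p in pos_scores for n in neg_scores])

def wmw_ranksum(pos_scores, neg_scores):
    """Same statistic via the Mann-Whitney U rank-sum: O(m log m) from sorting.
    Assumes no tied scores (ties would require midranks)."""
    m_pos, m_neg = len(pos_scores), len(neg_scores)
    scores = np.concatenate([pos_scores, neg_scores])
    ranks = scores.argsort().argsort() + 1           # ranks 1..m of all scores
    u = ranks[:m_pos].sum() - m_pos * (m_pos + 1) / 2  # rank-sum minus its minimum
    return u / (m_pos * m_neg)
```

For example, with `pos_scores = [2, 4]` and `neg_scores = [1, 3]`, both functions return 0.75 (three of the four positive-negative pairs are correctly ordered).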
Year | DOI | Venue |
---|---|---|
2008 | 10.1109/TPAMI.2007.70776 | IEEE Trans. Pattern Anal. Mach. Intell. |
Keywords | Field | DocType |
---|---|---|
training data,microeconomics,error function,ranking,information retrieval,artificial intelligence,search engines,ordinal regression,learning artificial intelligence,computational complexity,collaboration,computer simulation,wilcoxon mann whitney,conjugate gradient,collaborative filtering,algorithms,indexing terms,statistics,approximation algorithms,machine learning,regression analysis | Approximation algorithm,Error function,Search algorithm,Stability (learning theory),Ranking,Ranking SVM,Computer science,Algorithm,Ordinal regression,Artificial intelligence,Machine learning,Computational complexity theory | Journal |
Volume | Issue | ISSN |
---|---|---|
30 | 7 | 0162-8828 |
Citations | PageRank | References |
---|---|---|
20 | 0.98 | 19 |
Authors |
---|
3 |
Name | Order | Citations | PageRank |
---|---|---|---|
Vikas C. Raykar | 1 | 864 | 73.74 |
Ramani Duraiswami | 2 | 1721 | 161.98 |
Balaji Krishnapuram | 3 | 803 | 63.07 |