Abstract |
---|
Learning-to-rank for information retrieval has gained increasing interest in recent years. Inspired by the success of sparse models, we consider the problem of sparse learning-to-rank, where the learned ranking models are constrained to have only a few nonzero coefficients. We begin by formulating the sparse learning-to-rank problem as a convex optimization problem with a sparsity-inducing ℓ1 constraint. Since the ℓ1 constraint is nondifferentiable, the critical issue is how to solve the optimization problem efficiently. To address this issue, we propose a learning algorithm from the primal-dual perspective. Furthermore, we prove that, after at most O(1/ε) iterations, the proposed algorithm is guaranteed to obtain an ε-accurate solution. This convergence rate is better than that of the popular subgradient descent algorithm, i.e., O(1/ε²). Empirical evaluation on several public benchmark data sets demonstrates the effectiveness of the proposed algorithm: 1) compared to methods that learn dense models, learning a ranking model with sparsity constraints significantly improves ranking accuracy; 2) compared to other methods for sparse learning-to-rank, the proposed algorithm tends to obtain sparser models and achieves superior gains in both ranking accuracy and training time; 3) compared to several state-of-the-art algorithms, the ranking accuracies of the proposed algorithm are very competitive and stable. |
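The abstract's ℓ1-constrained formulation can be illustrated with a minimal sketch. Note this is not the paper's primal-dual algorithm: as a simple stand-in it uses the projected subgradient method the abstract compares against, with a pairwise hinge ranking loss and Euclidean projection onto the ℓ1 ball (the sorting-based projection of Duchi et al.). All function names and parameters here are illustrative assumptions.

```python
import numpy as np

def project_l1_ball(w, radius=1.0):
    """Euclidean projection of w onto the l1 ball {v : ||v||_1 <= radius},
    via the sorting-based algorithm. This is what enforces sparsity."""
    if np.abs(w).sum() <= radius:
        return w
    u = np.sort(np.abs(w))[::-1]                 # sorted magnitudes, descending
    cssv = np.cumsum(u) - radius
    rho = np.nonzero(u > cssv / (np.arange(len(u)) + 1))[0][-1]
    theta = cssv[rho] / (rho + 1.0)              # soft-threshold level
    return np.sign(w) * np.maximum(np.abs(w) - theta, 0.0)

def train_sparse_ranker(X, y, radius=1.0, lr=0.05, epochs=100):
    """Linear ranking model scored by x @ w, trained on preference pairs
    (i ranked above j whenever y[i] > y[j]) with a hinge loss,
    keeping w inside the l1 ball after every update."""
    n, d = X.shape
    w = np.zeros(d)
    pairs = [(i, j) for i in range(n) for j in range(n) if y[i] > y[j]]
    for _ in range(epochs):
        for i, j in pairs:
            if (X[i] - X[j]) @ w < 1.0:          # violated margin: take a step
                w += lr * (X[i] - X[j])
            w = project_l1_ball(w, radius)       # re-project onto the l1 ball
    return w
```

Because every iterate is projected back onto the ℓ1 ball, the returned weight vector satisfies the sparsity constraint by construction; the paper's contribution is a primal-dual scheme that reaches an ε-accurate solution of such a constrained problem in O(1/ε) iterations rather than the O(1/ε²) of this subgradient baseline.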
Year | DOI | Venue |
---|---|---|
2013 | 10.1109/TC.2012.62 | IEEE Trans. Computers |
Keywords | Field | DocType
---|---|---
subgradient descent algorithm, ranking accuracy improvement, performance gain, state-of-the-art algorithm, convex optimization problem, fenchel duality, sparse-inducing l1 constraint, learning (artificial intelligence), primal-dual algorithm, ranking model, information retrieval, ranking accuracy, convex programming, nonzero coefficients, sparse model, dense model learning, optimization problem, sparse learning-to-rank problem, proposed algorithm, gradient methods, learned ranking models, sparse models, sparse learning-to-rank, ranking algorithm, efficient primal-dual algorithm, popular subgradient descent algorithm, sparsity constraints, learning-to-rank, learning algorithm, vectors, accuracy, prediction algorithms, computational modeling, optimization, support vector machines, learning artificial intelligence, learning to rank | Learning to rank, Diffusing update algorithm, Mathematical optimization, Ranking SVM, Ranking, Subgradient method, Computer science, Support vector machine, Convex optimization, Optimization problem | Journal
Volume | Issue | ISSN
---|---|---
62 | 6 | 0018-9340
Citations | PageRank | References
---|---|---
25 | 0.78 | 21
Authors |
---|
5 |