Abstract |
---|
We propose a distributed learning-to-rank method and demonstrate its effectiveness in web-scale image retrieval. With the ever-increasing amount of data, it is impractical to train a centralized ranking model for large-scale learning problems. In distributed learning, the discrepancy between the training subsets and the whole dataset is non-trivial, yet it has been overlooked in previous work. In this paper, we first introduce a cost factor into boosting algorithms to balance the individual models toward the whole data. We then decompose the original algorithm into multiple layers, whose aggregation forms a superior ranker that scales easily to billions of images. Extensive experiments show that the proposed method outperforms the straightforward aggregation of boosting algorithms. |
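The abstract's high-level idea, training one ranker per data shard and aggregating the shard models with a cost factor that re-weights each one toward the global data distribution, can be sketched as follows. This is a minimal illustrative sketch, not the authors' algorithm: the toy `train_shard_ranker`, the divergence-based `cost_factor`, and the weighted-sum `aggregate` are all hypothetical stand-ins for the paper's boosted rankers and balancing scheme.

```python
import math

def train_shard_ranker(shard):
    """Toy 'ranker' for one shard: scores items using a shard statistic.
    Stands in for a boosted ranking model trained on that shard alone."""
    shard_mean = sum(x for x, _ in shard) / len(shard)
    return lambda x, m=shard_mean: x * m

def cost_factor(shard, global_mean):
    """One plausible cost factor: down-weight shards whose distribution
    deviates from the whole data, balancing models toward the whole."""
    shard_mean = sum(x for x, _ in shard) / len(shard)
    return math.exp(-abs(shard_mean - global_mean))

def aggregate(rankers, costs):
    """Cost-weighted combination of shard rankers -> one global scorer."""
    z = sum(costs)
    return lambda x: sum(c * r(x) for r, c in zip(rankers, costs)) / z

# Usage: shards of (feature, label) pairs, e.g. from a distributed store.
shards = [[(1.0, 1), (2.0, 0)], [(10.0, 1), (12.0, 0)]]
all_x = [x for s in shards for x, _ in s]
global_mean = sum(all_x) / len(all_x)

rankers = [train_shard_ranker(s) for s in shards]
costs = [cost_factor(s, global_mean) for s in shards]
scorer = aggregate(rankers, costs)
```

Here both shards deviate equally from the global mean, so they receive equal weight and the aggregate reduces to a plain average of the shard rankers; unbalanced shards would be down-weighted instead.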
Year | Venue | Keywords |
---|---|---|
2014 | EUSIPCO | balance learning, big data, distributed learning, learning (artificial intelligence), boosting algorithm, web-scale image retrieval, centralized ranking model, image retrieval, Internet, large-scale learning problem, learning to rank |
Field | DocType | ISSN
---|---|---|
Online machine learning, Learning to rank, Data mining, Semi-supervised learning, Instance-based learning, Active learning (machine learning), Computer science, Unsupervised learning, Boosting (machine learning), Artificial intelligence, Computational learning theory, Machine learning | Conference | 2076-1465
Citations | PageRank | References
---|---|---|
0 | 0.34 | 9
Authors |
---|
5 |
Name | Order | Citations | PageRank |
---|---|---|---|
Guanqun Cao | 1 | 28 | 2.71 |
Iftikhar Ahmad | 2 | 156 | 27.06 |
Honglei Zhang | 3 | 100 | 13.45 |
Weiyi Xie | 4 | 4 | 0.80 |
Moncef Gabbouj | 5 | 3282 | 386.30 |