Abstract |
---|
Recommender systems suggest customized products to users. Most recommender algorithms build collaborative models from web user profiles. In recent years, the Netflix contest has attracted considerable attention from recommender-systems researchers. However, many recent papers on recommender systems report results evaluated with the methodology used in the Netflix contest in domains whose objectives differ from those of the contest (e.g., the top-N recommendation task). In this paper we do not propose new recommender algorithms; rather, we compare different aspects of the official Netflix contest methodology, based on RMSE and holdout, with methodologies based on k-fold cross-validation and classification accuracy metrics. We show, through case studies, that different evaluation methodologies lead to totally contrasting conclusions about the quality of recommendations. |
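The abstract's central claim, that RMSE and classification accuracy metrics can rank the same algorithms in opposite orders, can be sketched with a toy example. The ratings and predictions below are illustrative values invented here, not data from the paper: algorithm A has the lower RMSE, yet algorithm B places more relevant items in its top-N list.

```python
import math

# Hypothetical held-out ratings; items rated >= 4 count as "relevant".
actual = [5, 3, 4, 2, 1]
pred_a = [4, 4.1, 3, 2, 1]   # small errors everywhere, but misranks item 1
pred_b = [5, 1, 4, 1, 3]     # larger errors, but the top items are correct

def rmse(actual, predicted):
    """Root mean squared error over all predicted ratings."""
    n = len(actual)
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)

def precision_at_n(actual, predicted, n=2, threshold=4):
    """Fraction of the top-n predicted items that are actually relevant."""
    top = sorted(range(len(predicted)),
                 key=lambda i: predicted[i], reverse=True)[:n]
    return sum(actual[i] >= threshold for i in top) / n

print(rmse(actual, pred_a), rmse(actual, pred_b))                    # A wins on RMSE
print(precision_at_n(actual, pred_a), precision_at_n(actual, pred_b))  # B wins on top-N
```

Here algorithm A would be preferred under the Netflix-style RMSE methodology, while algorithm B would be preferred under a top-N classification accuracy methodology, which is the kind of contradiction the paper's case studies examine.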
Year | DOI | Venue |
---|---|---|
2009 | 10.1109/WAINA.2009.127 | AINA Workshops |
Keywords | Field | DocType |
---|---|---|
different aspect,collaborative model,new recommender algorithm,recommender system,different evaluation methodology,recommender algorithm,official netflix contest methodology,netflix contest,recommender algorithms,case study,classification accuracy metrics,noise measurement,user interfaces,groupware,recommender systems,collaboration,internet,evaluation,motion pictures,data mining,web mining,metrics,probability density function,testing | Recommender system,Collaborative software,Computer science,CONTEST,Algorithm,User interface,The Internet | Conference |
Citations | PageRank | References |
---|---|---|
11 | 0.76 | 7 |
Authors |
---|
4 |
Name | Order | Citations | PageRank |
---|---|---|---|
Elica Campochiaro | 1 | 11 | 0.76 |
Riccardo Casatta | 2 | 11 | 0.76 |
Paolo Cremonesi | 3 | 1306 | 87.23 |
Roberto Turrin | 4 | 859 | 34.94 |