Title
MSAP - Multi-Step Adversarial Perturbations on Recommender Systems Embeddings.
Abstract
Recommender systems (RSs) have attained exceptional performance in learning users' preferences and finding the most suitable products. Recent advances in adversarial machine learning (AML) in computer vision have raised interest in recommenders' security. It has been demonstrated that widely adopted model-based recommenders, e.g., BPR-MF, are not robust to adversarial perturbations added to the learned parameters, e.g., users' embeddings, which can cause a drastic reduction of recommendation accuracy. However, the state-of-the-art adversarial method, the fast gradient sign method (FGSM), builds the perturbation with a single-step procedure. In this work, we extend FGSM by proposing multi-step adversarial perturbation (MSAP) procedures to study recommenders' robustness under more powerful attacks. Keeping the perturbation magnitude fixed, we show that MSAP is much more harmful than FGSM in corrupting the recommendation performance of BPR-MF. Then, we assess the efficacy of MSAP on a robustified version of BPR-MF, i.e., AMF. Finally, we analyze the variation of fairness measures for each perturbed recommender. Code and data are available at https://github.com/sisinflab/MSAP.
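As a rough illustration of the idea summarized above, the sketch below iterates a sign-gradient step on a single user embedding while clipping the accumulated perturbation to a fixed budget, so the single-step case reduces to FGSM. This is a minimal NumPy sketch under assumed toy settings: the function names (bpr_grad_wrt_user, msap_perturb), the one-triple BPR loss, and the L-infinity budget are illustrative assumptions, not the paper's implementation; the actual code is in the linked repository.

```python
# Illustrative sketch (not the authors' code): a multi-step, FGSM-style
# perturbation of one user embedding under a fixed L_inf budget `eps`.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bpr_grad_wrt_user(p_u, q_i, q_j):
    """Gradient of the BPR pairwise loss -log(sigmoid(p_u . (q_i - q_j)))
    with respect to the user embedding p_u."""
    diff = q_i - q_j
    x = p_u @ diff
    return -(1.0 - sigmoid(x)) * diff

def msap_perturb(p_u, q_i, q_j, eps=0.5, alpha=0.1, steps=10):
    """Build the perturbation in `steps` sign-gradient updates, clipping back
    to the [-eps, eps] box so the overall magnitude stays fixed; steps=1 with
    alpha=eps recovers single-step FGSM."""
    delta = np.zeros_like(p_u)
    for _ in range(steps):
        grad = bpr_grad_wrt_user(p_u + delta, q_i, q_j)
        delta = delta + alpha * np.sign(grad)   # ascend the BPR loss
        delta = np.clip(delta, -eps, eps)       # keep the fixed budget
    return p_u + delta

# Toy usage: perturb a random user embedding against one (pos, neg) item pair.
rng = np.random.default_rng(0)
p_u, q_i, q_j = rng.normal(size=(3, 16))
p_adv = msap_perturb(p_u, q_i, q_j)
```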
Year
2021
DOI
10.32473/flairs.v34i1.128443
Venue
FLAIRS Conference
DocType
Conference
Volume
34
Issue
1
Citations
0
PageRank
0.34
References
0
Authors
5
Name | Order | Citations | PageRank
Vito Walter Anelli | 1 | 91 | 18.45
Alejandro Bellogín | 2 | 0 | 0.34
Yashar Deldjoo | 3 | 186 | 24.74
Tommaso Di Noia | 4 | 0 | 0.34
Felice Antonio Merra | 5 | 32 | 6.46