Title
An Empirical Analysis of Backward Compatibility in Machine Learning Systems
Abstract
In many applications of machine learning (ML), updates are performed with the goal of enhancing model performance. However, current practices for updating models rely solely on isolated, aggregate performance analyses, overlooking important dependencies, expectations, and needs in real-world deployments. We consider how updates, intended to improve ML models, can introduce new errors that can significantly affect downstream systems and users. For example, updates in models used in cloud-based classification services, such as image recognition, can cause unexpected erroneous behavior in systems that make calls to the services. Prior work has shown the importance of "backward compatibility" for maintaining human trust. We study challenges with backward compatibility across different ML architectures and datasets, focusing on common settings including data shifts with structured noise and ML employed in inferential pipelines. Our results show that (i) compatibility issues arise even without data shift due to optimization stochasticity, (ii) training on large-scale noisy datasets often results in significant decreases in backward compatibility even when model accuracy increases, and (iii) distributions of incompatible points align with noise bias, motivating the need for compatibility-aware de-noising and robustness methods.
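The notion of backward compatibility studied here can be illustrated with the backward trust compatibility score from the prior work the abstract cites: the fraction of examples the old model classified correctly that the updated model also classifies correctly. The sketch below is illustrative only; the function name and toy data are mine, not from the paper.

```python
def backward_trust_compatibility(y_true, pred_old, pred_new):
    """Fraction of examples the old model got right that the new
    model also gets right (1.0 = fully backward compatible)."""
    old_correct = [p == t for p, t in zip(pred_old, y_true)]
    n_old = sum(old_correct)
    if n_old == 0:
        return 1.0  # vacuously compatible: old model was never right
    both = sum(1 for oc, p, t in zip(old_correct, pred_new, y_true)
               if oc and p == t)
    return both / n_old

# Toy update: accuracy rises from 3/5 to 4/5, yet the new model
# breaks one example the old model handled, so compatibility < 1.
y     = [0, 1, 1, 0, 1]
old_p = [0, 1, 1, 1, 0]   # correct on indices 0, 1, 2
new_p = [0, 1, 0, 0, 1]   # correct on indices 0, 1, 3, 4
print(backward_trust_compatibility(y, old_p, new_p))  # -> 0.6666666666666666
```

The toy example mirrors finding (ii) in miniature: aggregate accuracy improves while backward compatibility drops, which is exactly the regression an isolated accuracy analysis would miss.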
Year: 2020
DOI: 10.1145/3394486.3403379
Venue: KDD '20: The 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Virtual Event, CA, USA, July 2020
DocType: Conference
ISBN: 978-1-4503-7998-4
Citations: 1
PageRank: 0.63
References: 25
Authors: 5
Name              Order  Citations  PageRank
Megha Srivastava    1        11        2.14
Besmira Nushi       2       120       10.98
Ece Kamar           3       577       48.11
Shital Shah         4        47        4.88
Eric Horvitz        5      9402     1058.25