Title
Generalization Analysis for Game-Theoretic Machine Learning
Abstract
For Internet applications like sponsored search, caution needs to be taken when using machine learning to optimize their mechanisms (e.g., auctions), since self-interested agents in these applications may change their behaviors (and thus the data distribution) in response to the mechanisms. To tackle this problem, a framework called game-theoretic machine learning (GTML) was recently proposed, which first learns a Markov behavior model to characterize agents' behaviors, and then learns the optimal mechanism by simulating agents' behavior changes in response to the mechanism. While GTML has demonstrated practical success, its generalization analysis is challenging because the behavior data are non-i.i.d. and dependent on the mechanism. To address this challenge, we first decompose the generalization error for GTML into the behavior learning error and the mechanism learning error. Second, for the behavior learning error, we obtain novel non-asymptotic error bounds for both parametric and non-parametric behavior learning methods. Third, for the mechanism learning error, we derive a uniform convergence bound based on a new concept, the nested covering number of the mechanism space, together with generalization analysis techniques developed for mixing sequences.
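The decomposition described in the abstract can be summarized schematically as below. The notation (R, \widehat{R}_n, \hat f, \epsilon_{\mathrm{beh}}, \epsilon_{\mathrm{mech}}) is illustrative and not taken from the paper itself, so this is only a sketch of the stated decomposition under assumed notation, not its exact statement.

\underbrace{\bigl| R(\hat f) - \widehat{R}_n(\hat f) \bigr|}_{\text{generalization error of the learned mechanism } \hat f}
\;\le\;
\underbrace{\epsilon_{\mathrm{beh}}}_{\text{behavior learning error}}
\;+\;
\underbrace{\epsilon_{\mathrm{mech}}}_{\text{mechanism learning error}}

Here \epsilon_{\mathrm{beh}} is the term for which the paper gives non-asymptotic bounds for parametric and non-parametric behavior learning, and \epsilon_{\mathrm{mech}} is the uniform convergence term controlled via the nested covering number of the mechanism space and mixing-sequence arguments.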
Year: 2014
Venue: Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence
Keywords: empirical risk minimization
DocType:
Volume: abs/1410.3341
Citations: 4
Journal:
PageRank: 0.39
References: 5
Authors: 6
Name           Order   Citations   PageRank
Haifang Li     1       4           0.39
Fei Tian       2       160         11.88
Wei Chen       3       166         14.55
Tao Qin        4       2384        147.25
Zhi-Ming Ma    5       227         18.26
Tie-yan Liu    6       4662        256.32