Abstract |
---|
We address a practical problem ubiquitous in modern marketing campaigns, in which a central agent tries to learn a policy for allocating strategic financial incentives to customers and observes only bandit feedback. In contrast to traditional policy optimization frameworks, we take into account the additional reward structure and budget constraints common in this setting, and develop a new two-step method for solving this constrained counterfactual policy optimization problem. Our method first casts the reward estimation problem as a domain adaptation problem with supplementary structure, and then uses the resulting estimators to optimize the policy under constraints. We also establish theoretical error bounds for our estimation procedure, and we empirically show that the approach leads to significant improvement on both synthetic and real datasets. |
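The abstract's two-step pipeline — first estimate each customer's counterfactual reward per incentive from logged bandit feedback, then allocate incentives under a budget — can be illustrated with a minimal sketch. Everything below is a simplified stand-in, not the paper's method: inverse-propensity scoring (IPS) replaces the domain-adaptation estimator, and a greedy gain-per-cost rule replaces the constrained optimizer; all variable names and sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

n_users, n_actions = 100, 3        # hypothetical incentive levels 0, 1, 2
costs = np.array([0.0, 1.0, 2.0])  # cost of granting each incentive level
budget = 60.0                      # total spend allowed

# True (unknown) expected reward for each (user, incentive) pair,
# used here only to simulate logged feedback.
true_reward = rng.uniform(0, 1, (n_users, n_actions)) + 0.3 * costs

# Step 1: logged bandit feedback from a uniform logging policy, with
# IPS estimation as a simple stand-in for the paper's domain-adaptation
# estimator (only the chosen action's reward is ever observed).
n_logs = 20000
users = rng.integers(0, n_users, n_logs)
acts = rng.integers(0, n_actions, n_logs)
props = np.full(n_logs, 1.0 / n_actions)   # logging propensities
rews = rng.binomial(1, np.clip(true_reward[users, acts] / 2.0, 0, 1))

est = np.zeros((n_users, n_actions))
cnt = np.zeros((n_users, n_actions))
np.add.at(est, (users, acts), rews / props)
np.add.at(cnt, (users, acts), 1.0 / props)
est = np.divide(est, cnt, out=np.zeros_like(est), where=cnt > 0)

# Step 2: budget-constrained allocation -- greedily upgrade the users
# with the best estimated reward gain per unit cost (an LP relaxation
# of the knapsack-style constraint would be exact).
alloc = np.zeros(n_users, dtype=int)   # everyone starts at the free action
spend = 0.0
gains = [(est[u, a] - est[u, 0], costs[a], u, a)
         for u in range(n_users) for a in range(1, n_actions)]
for gain, cost, u, a in sorted(gains, key=lambda t: -t[0] / t[1]):
    extra = cost - costs[alloc[u]]     # incremental cost of the upgrade
    if gain > 0 and extra > 0 and spend + extra <= budget:
        spend += extra
        alloc[u] = a
```

The greedy rule is a coarse approximation chosen to keep the sketch short; the paper's contribution lies in the structured estimator of step 1 and in solving the constrained policy optimization exactly.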
Year | Venue | DocType
---|---|---|
2019 | Thirty-Fourth AAAI Conference on Artificial Intelligence, the Thirty-Second Innovative Applications of Artificial Intelligence Conference and the Tenth AAAI Symposium on Educational Advances in Artificial Intelligence | Journal

Volume | ISSN | Citations
---|---|---|
34 | 2159-5399 | 0

PageRank | References | Authors
---|---|---|
0.34 | 24 | 7
Name | Order | Citations | PageRank |
---|---|---|---|
Romain Lopez | 1 | 0 | 2.03 |
Chenchen Li | 2 | 10 | 7.02 |
Xiang Yan | 3 | 0 | 0.34 |
Junwu Xiong | 4 | 0 | 0.34 |
Michael I. Jordan | 5 | 31220 | 3640.80 |
Yuan Qi | 6 | 24 | 15.41 |
Le Song | 7 | 2437 | 159.27 |