Title
Budgeted and Non-Budgeted Causal Bandits
Abstract
Learning good interventions in a causal graph can be modeled as a stochastic multi-armed bandit problem with side information. First, we study this problem when interventions are more expensive than observations and a budget is specified. If there are no backdoor paths from the intervenable nodes to the reward node, we propose an algorithm that minimizes simple regret by optimally trading off observations and interventions based on the cost of interventions. We also propose an algorithm that accounts for the cost of interventions, utilizes causal side information, and minimizes the expected cumulative regret without exceeding the budget. Our algorithm performs better than standard algorithms that do not take side information into account. Finally, we study the problem of learning the best interventions without a budget constraint in general graphs and give an algorithm that achieves constant expected cumulative regret in terms of the instance parameters when the parent distribution of the reward variable for each intervention is known. Our results are experimentally validated and compared to the best-known bounds in the current literature.
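The following is a minimal sketch, not the paper's algorithm, of the budgeted simple-regret setting the abstract describes: cheap observational samples and expensive interventional samples must be traded off under a total budget, and simple regret is the gap between the best intervention's mean reward and that of the recommended one. The parallel causal graph, the costs, and all names (COST_OBS, COST_INT, BUDGET) are illustrative assumptions; with no backdoor paths, observational conditionals match interventional means, which is what the two-phase heuristic below exploits.

# Minimal sketch (assumptions throughout, NOT the paper's algorithm):
# binary variables X_1..X_N with a parallel graph into reward Y, so
# E[Y | X_i = v] from observations equals E[Y | do(X_i = v)].
import numpy as np

rng = np.random.default_rng(0)

N = 6              # intervenable binary variables (assumed)
COST_OBS = 1.0     # cost of one observational sample (assumed)
COST_INT = 10.0    # cost of one intervention (assumed)
BUDGET = 600.0     # total budget (assumed)

# Environment: X_i ~ Bernoulli(q_i) independently; P(Y=1 | x) linear in x.
q = rng.uniform(0.05, 0.95, size=N)
w = rng.uniform(-0.05, 0.08, size=N)
BASE = 0.4

def sample_x():
    return (rng.random(N) < q).astype(int)

def sample_y(x):
    return int(rng.random() < BASE + w @ x)

# Arms are the 2N interventions do(X_i = v); their true means follow
# from linearity: E[Y | do(X_i = v)] = BASE + sum_j w_j q_j - w_i q_i + w_i v.
true_mean = np.array([[BASE + w @ q - w[i] * q[i] + w[i] * v
                       for v in (0, 1)] for i in range(N)])

# Phase 1: spend half the budget on cheap observations.
counts = np.zeros((N, 2))
sums = np.zeros((N, 2))
for _ in range(int((BUDGET / 2) / COST_OBS)):
    x = sample_x()
    y = sample_y(x)
    for i in range(N):
        counts[i, x[i]] += 1
        sums[i, x[i]] += y

# Phase 2: spend the rest intervening on the least-observed arms,
# whose conditional-mean estimates are the most uncertain.
order = np.argsort(counts, axis=None)       # rarest (i, v) pairs first
for k in range(int((BUDGET / 2) / COST_INT)):
    i, v = np.unravel_index(order[k % len(order)], counts.shape)
    x = sample_x()
    x[i] = v                                # do(X_i = v)
    counts[i, v] += 1
    sums[i, v] += sample_y(x)

# Recommend the empirically best arm and report its simple regret.
est = sums / np.maximum(counts, 1)
best = np.unravel_index(np.argmax(est), est.shape)
regret = true_mean.max() - true_mean[best]
print(f"recommended do(X_{best[0]} = {best[1]}), simple regret {regret:.4f}")

The design point this illustrates is the observation/intervention trade-off from the abstract: observations cheaply estimate most arms at once, so the expensive intervention budget is reserved for the arms whose conditional means observational data pins down worst.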
Year
2021
Venue
24TH INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS (AISTATS)
DocType
Conference
Volume
130
ISSN
2640-3498
Citations
0
PageRank
0.34
References
0
Authors
3
Name            Order   Citations   PageRank
Vineet Nair     1       5           2.11
Vishakha Patil  2       0           1.35
Gaurav Sinha    3       0           0.34