Abstract
---
This paper shows that causal model discovery is not an NP-hard problem, in the sense that for sparse graphs with node degree bounded by k, the sound and complete causal model can be obtained in worst case order N^{2(k+2)} independence tests, even when latent variables and selection bias may be present. We present a modification of the well-known FCI algorithm that implements the method for an independence oracle, and suggest improvements for versions working on finite-sample/real-world data. This result does not contradict any known hardness results, and does not solve an NP-hard problem: it simply shows that sparse causal discovery is perhaps more complicated than, but not as hard as, learning minimal Bayesian networks.
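The polynomial bound rests on the fact that, when every node has degree at most k, a constraint-based skeleton search only ever needs conditioning sets of bounded size. A minimal sketch of that counting argument (an illustrative loose bound of our own, not the paper's FCI-specific N^{2(k+2)} analysis; the function name is ours):

```python
from math import comb

def ci_test_bound(n, k):
    """Loose upper bound on the number of conditional independence
    tests in a PC/FCI-style skeleton search over n variables when
    every conditioning set has size at most k: each of the C(n, 2)
    node pairs is tested against all subsets of the remaining n - 2
    variables of size 0..k.  (Illustrative only; the paper derives a
    sharper bound of order N^{2(k+2)} for its modified FCI.)
    """
    return comb(n, 2) * sum(comb(n - 2, s) for s in range(k + 1))

# For fixed k the bound is polynomial in n (roughly n^(k+2) here):
print(ci_test_bound(20, 2))  # → 32680
```

For fixed k this count grows polynomially in n, which is the intuition behind the claim that bounded-degree sparsity keeps discovery tractable; without the degree bound, the number of candidate conditioning sets is exponential.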
Year | Venue | DocType
---|---|---
2013 | UAI | Conference

arXiv ID | Citations | PageRank
---|---|---
abs/1309.6824 | 9 | 0.71

References | Authors
---|---
9 | 3
Name | Order | Citations | PageRank
---|---|---|---
Tom Claassen | 1 | 61 | 8.76
Joris M. Mooij | 2 | 679 | 50.48
Tom Heskes | 3 | 1519 | 198.44