Abstract |
---|
Bayesian neural network (BNN) priors are defined in parameter space, making it hard to encode prior knowledge expressed in function space. We formulate a prior that incorporates functional constraints about what the output can or cannot be in regions of the input space. Output-Constrained BNNs (OC-BNNs) represent an interpretable approach to enforcing a range of constraints, fully consistent with the Bayesian framework and amenable to black-box inference. We demonstrate how OC-BNNs improve model robustness and prevent the prediction of infeasible outputs in two real-world applications in healthcare and robotics. |
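The abstract describes priors that constrain what a network may output in regions of the input space. A minimal sketch of that general idea, assuming a soft-penalty formulation (the network layout, penalty form, and all names here are hypothetical, not the paper's actual construction), is a log-prior combining a standard Gaussian weight prior with a penalty on constraint violations at inputs sampled from the constrained region:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(weights, x):
    """Toy one-hidden-layer network; `weights` is a flat vector (hypothetical layout)."""
    w1 = weights[:10].reshape(1, 10)
    b1 = weights[10:20]
    w2 = weights[20:30].reshape(10, 1)
    b2 = weights[30]
    h = np.tanh(x @ w1 + b1)
    return (h @ w2).ravel() + b2

def log_prior(weights, constraint_region, bounds, n_samples=64, penalty=100.0):
    """Isotropic Gaussian log-prior on weights, plus a soft penalty discouraging
    outputs outside [lo, hi] on inputs drawn from the constrained region.
    A generic sketch of an output-constrained prior, not the paper's exact form."""
    lo, hi = bounds
    base = -0.5 * np.sum(weights ** 2)  # N(0, I) log-density up to a constant
    xs = rng.uniform(*constraint_region, size=(n_samples, 1))
    y = forward(weights, xs)
    violation = np.maximum(y - hi, 0.0) + np.maximum(lo - y, 0.0)
    return base - penalty * np.mean(violation ** 2)
```

Weight settings whose outputs violate the bounds on the constrained region receive lower prior density, so posterior inference is steered away from infeasible predictions.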
Year | Venue | DocType |
---|---|---|
2019 | arXiv: Learning | Journal |
Volume | Citations | PageRank |
---|---|---|
abs/1905.06287 | 0 | 0.34 |
References | Authors |
---|---|
0 | 8 |
Name | Order | Citations | PageRank |
---|---|---|---|
Wanqian Yang | 1 | 0 | 1.35 |
Lars Lorch | 2 | 0 | 1.01 |
Moritz A. Graule | 3 | 7 | 1.60 |
Srivatsan Srinivasan | 4 | 0 | 2.70 |
Anirudh Suresh | 5 | 0 | 0.68 |
Jiayu Yao | 6 | 8 | 3.87 |
Melanie F. Pradier | 7 | 3 | 2.12 |
Finale Doshi-Velez | 8 | 574 | 51.99 |