Abstract |
---|
We use the language of uninformative Bayesian prior choice to study the selection of appropriately simple effective models. We consider the prior which maximizes the mutual information between parameters and predictions, learning as much as possible from finite data. When many parameters are poorly constrained by data, this puts weight only on boundaries of the parameter manifold. Thus it selects a lower-dimensional effective theory in a principled way, ignoring irrelevant parameter directions. Only in the limit where there is sufficient data to tightly constrain all parameters do we recover Jeffreys prior. |
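The abstract's central claim can be illustrated in miniature. The prior maximizing the mutual information between parameter and data is the capacity-achieving input distribution of the likelihood viewed as a channel, which can be computed with the standard Blahut–Arimoto algorithm. The sketch below (an illustration of the general idea, not code from the paper) treats a single Bernoulli trial with unknown bias: with this little data, the optimal prior puts all its weight on the boundary points x = 0 and x = 1, rather than spreading out like Jeffreys prior.

```python
import numpy as np

def blahut_arimoto(channel, iters=1000):
    """channel[i, j] = P(outcome j | parameter x_i).
    Returns the capacity-achieving prior over x and the capacity in nats."""
    n = channel.shape[0]
    prior = np.full(n, 1.0 / n)              # start from a uniform prior
    for _ in range(iters):
        marginal = prior @ channel            # P(outcome j) under current prior
        # KL divergence of each likelihood row from the outcome marginal,
        # with the convention 0 * log 0 = 0
        ratio = np.where(channel > 0, channel / marginal, 1.0)
        kl = np.sum(channel * np.log(ratio), axis=1)
        prior = prior * np.exp(kl)            # Blahut-Arimoto multiplicative update
        prior /= prior.sum()
    # mutual information at the final prior
    marginal = prior @ channel
    ratio = np.where(channel > 0, channel / marginal, 1.0)
    kl = np.sum(channel * np.log(ratio), axis=1)
    return prior, float(prior @ kl)

x = np.linspace(0.0, 1.0, 101)                # parameter grid, endpoints included
channel = np.stack([1.0 - x, x], axis=1)      # one coin flip: outcomes {0, 1}
prior, capacity = blahut_arimoto(channel)
# prior concentrates on x = 0 and x = 1; capacity approaches log(2) nats (1 bit)
```

With more observations per parameter (a channel row per data set of N flips), the optimal prior adds interior atoms and, as N grows, approaches Jeffreys prior, matching the limit described in the abstract.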
Year | Venue | Field
---|---|---
2017 | arXiv: Data Analysis, Statistics and Probability | Rational ignorance, Jeffreys prior, Mutual information, Artificial intelligence, Effective field theory, Prior probability, Statistics, Manifold, Mathematics, Machine learning, Bayesian probability

DocType | Volume | Citations
---|---|---
Journal | abs/1705.01166 | 0

PageRank | References | Authors
---|---|---
0.34 | 0 | 4
Name | Order | Citations | PageRank |
---|---|---|---
Henry H. Mattingly | 1 | 0 | 0.34 |
Mark K. Transtrum | 2 | 13 | 2.68 |
Michael C. Abbott | 3 | 0 | 0.34 |
Benjamin B. Machta | 4 | 0 | 0.34 |