Title
Additive Scoring Rules for Discrete Sample Spaces
Abstract
In this paper, we develop strictly proper scoring rules that may be used to evaluate the accuracy of a sequence of probabilistic forecasts. In practice, when forecasts are submitted for multiple uncertainties, competing forecasts are ranked by their cumulative or average score. Alternatively, one could score the implied joint distributions. We demonstrate that these measures of forecast accuracy disagree under some commonly used rules. Furthermore, and most importantly, we show that forecast rankings can depend on the selected scoring procedure. In other words, under some scoring rules, the relative ranking of probabilistic forecasts does not depend solely on the information content of those forecasts and the observed outcome. Instead, the relative ranking of forecasts is a function of the process by which those forecasts are evaluated. As an alternative, we describe additive and strongly additive strictly proper scoring rules, which have the property that the score for the joint distribution is equal to a sum of scores for the associated marginal and conditional distributions. We give methods for constructing additive rules and demonstrate that the logarithmic score is the only strongly additive rule. Finally, we connect the additive properties of scoring rules with analogous properties for a general class of entropy measures.
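The additivity property described in the abstract can be illustrated numerically. The sketch below uses a hypothetical 2x2 joint forecast and an arbitrarily chosen observed outcome (none of the numbers come from the paper) to check that the logarithmic score of a joint forecast equals the sum of the scores of the corresponding marginal and conditional forecasts, and to contrast this with the quadratic score, one commonly used strictly proper rule that lacks this property.

```python
import numpy as np

# Illustrative two-event example (not from the paper): X and Y are binary,
# and the forecaster reports the joint distribution p(x, y) below.
p_joint = np.array([[0.40, 0.10],
                    [0.20, 0.30]])

p_x = p_joint.sum(axis=1)              # marginal distribution of X
p_y_given_x = p_joint / p_x[:, None]   # conditional distribution of Y given X

x_obs, y_obs = 0, 1                    # suppose (X = 0, Y = 1) is observed

# Logarithmic score: scoring the joint forecast equals scoring the marginal
# of X plus scoring the conditional of Y given X, because
# log p(x, y) = log p(x) + log p(y | x).
log_joint = np.log(p_joint[x_obs, y_obs])
log_sum = np.log(p_x[x_obs]) + np.log(p_y_given_x[x_obs, y_obs])
print(log_joint, log_sum)              # -2.3026 and -2.3026: identical

# Quadratic score for contrast: Q(p, i) = 2 * p_i - sum_j p_j**2.
def quadratic_score(p, i):
    return 2 * p[i] - np.sum(np.square(p))

q_joint = quadratic_score(p_joint.ravel(),
                          np.ravel_multi_index((x_obs, y_obs), p_joint.shape))
q_sum = quadratic_score(p_x, x_obs) + quadratic_score(p_y_given_x[x_obs], y_obs)
print(q_joint, q_sum)                  # -0.10 vs 0.22: the two procedures disagree
```

Under the quadratic rule, scoring the joint distribution and summing the scores of its marginal and conditional components generally give different values, which is the kind of divergence between scoring procedures the abstract refers to.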
Year
2020
DOI
10.1287/deca.2019.0398
Venue
Periodicals
Keywords
proper scoring rules, forecast elicitation, generalized entropy
DocType
Journal
Volume
17
Issue
2
ISSN
1545-8490
Citations
0
PageRank
0.34
References
0
Authors
2
Name | Order | Citations | PageRank
Zachary J. Smith | 1 | 0 | 0.34
J. Eric Bickel | 2 | 111 | 12.96