In decision theory, a scoring rule provides an evaluation metric for probabilistic predictions or forecasts. While "regular" loss functions (such as mean squared error) assign a goodness-of-fit score to a predicted value and an observed value, scoring rules assign such a score to a predicted probability distribution and an observed value. A scoring function, on the other hand, provides a summary measure for the evaluation of point predictions: one predicts a property or functional $T(F)$, such as the expectation or the median.
Scoring rules answer the question "how good is a predicted probability distribution compared to an observation?" An important property of scoring rules is (strict) propriety: a (strictly) proper scoring rule attains its lowest expected score when the predicted distribution equals the true distribution of the target variable. The score of an individual observation may still be lower for an "incorrect" forecast, but in expectation, predicting the "correct" distribution minimizes the score.
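As a minimal illustration of propriety (a sketch, not taken from the article): the Brier score $S(q, y) = (q - y)^{2}$ for a binary outcome is strictly proper, so its expected value under a Bernoulli target with true probability $p$ is minimized exactly at the forecast $q = p$. The concrete value of $p$ below is an arbitrary assumption for demonstration.

```python
import numpy as np

# Expected Brier score S(q) = E[(q - Y)^2] for a Bernoulli target Y with
# true success probability p. Since the Brier score is strictly proper,
# the expectation should be minimized exactly at the honest forecast q = p.
p = 0.3                                 # true probability (assumed for illustration)
q = np.linspace(0.0, 1.0, 101)          # candidate forecast probabilities
expected_score = p * (q - 1) ** 2 + (1 - p) * q ** 2

best_q = q[np.argmin(expected_score)]
print(best_q)  # -> 0.3, i.e. the expected score is lowest at q = p
```

Note that for any single observation, a forecast of $q = 0$ or $q = 1$ can score better than $q = p$; propriety is a statement about the expectation, not about individual outcomes.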
Scoring rules and scoring functions are often used as "cost functions" or "loss functions" of probabilistic forecasting models. They are evaluated as the empirical mean over a given sample, the "score". Scores of different predictions or models can then be compared to conclude which model is best. For example, consider a model that predicts (based on an input $x$) a mean $\mu \in \mathbb{R}$ and a standard deviation $\sigma \in \mathbb{R}_{+}$. Together, these variables define a Gaussian distribution $\mathcal{N}(\mu, \sigma^{2})$
, in essence predicting the target variable as a probability distribution. A common interpretation of probabilistic models is that they aim to quantify their own predictive uncertainty. In this example, an observed target variable $y \in \mathbb{R}$ is then compared to the predicted distribution $\mathcal{N}(\mu, \sigma^{2})$ and assigned a score $\mathcal{L}(\mathcal{N}(\mu, \sigma^{2}), y) \in \mathbb{R}$. Training on a scoring rule should "teach" a probabilistic model to predict when its uncertainty is low and when it is high, and it should result in calibrated predictions while minimizing predictive uncertainty.
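A minimal sketch of the example above, using the logarithmic score (here in its negative log-likelihood form for a Gaussian, one common strictly proper scoring rule). The specific numbers are illustrative assumptions, not from the article:

```python
import numpy as np

# Logarithmic score of an observation y under a predicted Gaussian
# N(mu, sigma^2), written as the negative log-likelihood; lower is better.
def log_score(mu, sigma, y):
    return 0.5 * np.log(2 * np.pi * sigma**2) + (y - mu) ** 2 / (2 * sigma**2)

y = 1.2  # observed target value (assumed for illustration)
print(log_score(mu=1.0, sigma=0.5, y=y))  # confident and close: low score
print(log_score(mu=1.0, sigma=5.0, y=y))  # underconfident: higher score
print(log_score(mu=4.0, sigma=0.5, y=y))  # overconfident and wrong: much higher score
```

The three cases show how such a score penalizes both excessive and insufficient uncertainty, which is what drives a model trained on it toward calibrated predictions.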
Although the example given concerns the probabilistic forecasting of a real-valued target variable, a variety of scoring rules have been designed with different target variables in mind. Scoring rules exist for binary and categorical probabilistic classification, as well as for univariate and multivariate probabilistic regression.
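For the categorical case, a sketch of the logarithmic score (an illustration under assumed numbers, not a definition from the article): a forecast is a probability vector over the classes, and observing class $k$ is scored as the negative log of the probability assigned to $k$.

```python
import numpy as np

# Logarithmic score for categorical classification: the forecast is a
# probability vector over K classes; observing class k is scored as
# -log(probs[k]), so low probability assigned to the outcome is penalized.
def categorical_log_score(probs, observed_class):
    return -np.log(probs[observed_class])

forecast = np.array([0.7, 0.2, 0.1])       # predicted distribution over 3 classes
print(categorical_log_score(forecast, 0))  # likely class observed: low score
print(categorical_log_score(forecast, 2))  # unlikely class observed: high score
```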