
A typical problem in the natural sciences and engineering is to select, among many candidate models, the one that best fits the observed data. The following figures show a linear, a quadratic, and a cubic polynomial fitted to the same test data (the test data were parabolic, with noise added on both axes).
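The original data and figures are not reproduced here, but a minimal sketch along the following lines could generate comparable test data and the three fits (the coefficients, noise levels, and seed are assumptions); it also prints the reduced chi-square of each fit, a quantity discussed below:

import numpy as np

# Assumed true model: a parabola, with Gaussian noise added on both axes.
rng = np.random.default_rng(42)
x_true = np.linspace(-3.0, 3.0, 50)
y_true = 1.5 * x_true**2 - 0.5 * x_true + 2.0
x = x_true + rng.normal(scale=0.1, size=x_true.size)
y = y_true + rng.normal(scale=1.0, size=y_true.size)

# Fit a linear, a quadratic, and a cubic polynomial to the same data.
fits = {deg: np.polyfit(x, y, deg) for deg in (1, 2, 3)}

sigma = 1.0  # assumed measurement uncertainty on y
for deg, coeffs in fits.items():
    resid = y - np.polyval(coeffs, x)
    # Reduced chi-square: chi^2 per degree of freedom (N points - N parameters).
    chi2_red = np.sum((resid / sigma) ** 2) / (x.size - (deg + 1))
    print(f"degree {deg}: reduced chi-square = {chi2_red:.2f}")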

The question is: which model best fits the observed data?

A classical approach is the reduced chi-square statistic, which takes the number of degrees of freedom of each fit into account. In Bayesian statistics, we can instead use the Bayes factor for model selection. The Bayes factor quantifies the support for one model over another: to choose between two models given the observed data, we compute the evidence of each model, also called the marginal likelihood, and take the ratio of the two evidences.

K = P(D|M1) / P(D|M2)
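Since evidences are usually computed and reported in log space for numerical stability, K is simply the exponential of the difference of the two log-evidences. A toy illustration (the log-evidence values below are placeholders, not results from the fits above):

import numpy as np

# Placeholder log-evidences for two models; real values would come
# from a sampler such as those discussed further down.
log_Z_M1 = -120.3  # hypothetical ln P(D|M1)
log_Z_M2 = -124.9  # hypothetical ln P(D|M2)

K = np.exp(log_Z_M1 - log_Z_M2)
print(f"Bayes factor K = {K:.1f}")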

A value of K larger than one indicates support for model M1. Beyond that, there are established scales for interpreting the strength of a Bayes factor. Kass and Raftery (1995) proposed the following:

K            Strength of evidence
1 to 3       not worth more than a bare mention
3 to 20      positive
20 to 150    strong
> 150        very strong
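The table maps directly onto a small helper function, for example:

def interpret_bayes_factor(K):
    """Kass & Raftery (1995) categories for a Bayes factor K >= 1."""
    if K < 1:
        return "supports the other model; interpret 1/K instead"
    if K < 3:
        return "not worth more than a bare mention"
    if K < 20:
        return "positive"
    if K < 150:
        return "strong"
    return "very strong"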

In Python, the evidence can be computed in different ways. One option is the parallel-tempering (PT) sampler in emcee. Another is PyMultiNest, a Python interface to the MultiNest package, which performs (importance) nested sampling.
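A minimal sketch of evidence estimation with the PT sampler might look as follows. It assumes emcee 2.x, where PTSampler and its thermodynamic_integration_log_evidence method are available (in emcee 3 the PT sampler moved to the separate ptemcee package); the data, prior bounds, and sampler settings are assumptions:

import numpy as np
import emcee  # assumes emcee 2.x; PTSampler was removed in emcee 3

# Assumed data: a noisy parabola with known measurement uncertainty sigma.
x = np.linspace(-3.0, 3.0, 50)
y = 1.5 * x**2 - 0.5 * x + 2.0 + np.random.normal(scale=1.0, size=x.size)
sigma = 1.0

def log_likelihood(theta):
    # Gaussian log-likelihood of the quadratic model y = a*x^2 + b*x + c,
    # including the normalization constant so ln Z is on an absolute scale.
    a, b, c = theta
    model = a * x**2 + b * x + c
    return -0.5 * np.sum(((y - model) / sigma) ** 2
                         + np.log(2.0 * np.pi * sigma**2))

def log_prior(theta):
    # Normalized flat prior on the box [-10, 10]^3 (bounds are assumptions).
    # Normalization matters: the evidence depends on the prior volume.
    if np.all(np.abs(theta) < 10.0):
        return -3.0 * np.log(20.0)
    return -np.inf

ntemps, nwalkers, ndim = 10, 50, 3
sampler = emcee.PTSampler(ntemps, nwalkers, ndim, log_likelihood, log_prior)

# One starting point per temperature and walker.
p0 = np.random.uniform(-1.0, 1.0, size=(ntemps, nwalkers, ndim))
sampler.run_mcmc(p0, 1000)

# Thermodynamic integration over the temperature ladder estimates ln Z.
lnZ, dlnZ = sampler.thermodynamic_integration_log_evidence(fburnin=0.25)
print(f"ln(evidence) = {lnZ:.2f} +/- {dlnZ:.2f}")

Repeating this with the likelihoods of the linear and cubic models and exponentiating the differences of the log-evidences gives the Bayes factors.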

In our example, the PT sampler finds strong support for the parabolic model over the cubic model, and at the same time very strong support for the parabolic model over the linear model. Please note that the Bayes factor does not tell us which model is correct; it only quantifies which model is preferred given the observed data.
