
Posterior odds ratio

by Marco Taboga, PhD

The posterior odds ratio is the ratio between the posterior probabilities of two events.

In Bayesian inference, it is used to compare different hypotheses or different models.


Definition

Let $M_{1}$, $M_{2}$ and E be three events.

We can use Bayes' rule to compute the conditional probabilities of $M_{1}$ and $M_{2}$ given E:
$$P(M_{1}|E)=\frac{P(E|M_{1})P(M_{1})}{P(E)},\qquad P(M_{2}|E)=\frac{P(E|M_{2})P(M_{2})}{P(E)}$$

The ratio
$$\frac{P(M_{1}|E)}{P(M_{2}|E)}$$
is called the posterior odds ratio of $M_{1}$ and $M_{2}$.

The event E is often called evidence.

Prior odds

The ratio of prior probabilities
$$\frac{P(M_{1})}{P(M_{2})}$$
is called the prior odds ratio.

Bayes factor

Finally, the ratio of likelihoods
$$\frac{P(E|M_{1})}{P(E|M_{2})}$$
is called the Bayes factor.

Therefore, the posterior odds ratio is the product of the Bayes factor and the prior odds ratio:
$$\frac{P(M_{1}|E)}{P(M_{2}|E)}=\frac{P(E|M_{1})}{P(E|M_{2})}\cdot\frac{P(M_{1})}{P(M_{2})}$$
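The decomposition above is easy to verify numerically. The following sketch, with made-up probabilities for illustration, computes the posterior odds ratio directly via Bayes' rule and checks that it equals the Bayes factor times the prior odds ratio:

```python
# Toy numeric check: posterior odds = Bayes factor * prior odds.
# All probabilities below are made up for illustration.
P_M1, P_M2 = 0.6, 0.4                    # prior probabilities of the two events
P_E_given_M1, P_E_given_M2 = 0.9, 0.3    # likelihoods of the evidence E

# Posterior probabilities via Bayes' rule.
P_E = P_E_given_M1 * P_M1 + P_E_given_M2 * P_M2   # law of total probability
post_M1 = P_E_given_M1 * P_M1 / P_E
post_M2 = P_E_given_M2 * P_M2 / P_E

posterior_odds = post_M1 / post_M2
bayes_factor = P_E_given_M1 / P_E_given_M2
prior_odds = P_M1 / P_M2

# The normalizer P(E) cancels in the ratio, so the identity holds exactly.
assert abs(posterior_odds - bayes_factor * prior_odds) < 1e-12
print(posterior_odds)  # 4.5
```

Note that $P(E)$ cancels in the ratio, which is why the posterior odds can be computed without ever evaluating the normalizing constant.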

Interpretation

The interpretation of the posterior odds ratio $\frac{P(M_{1}|E)}{P(M_{2}|E)}$ is pretty simple:

  - if the ratio is greater than 1, then $M_{1}$ is more likely than $M_{2}$;

  - if the ratio is less than 1, then $M_{1}$ is less likely than $M_{2}$;

  - if the ratio is equal to 1, then the two events are equally likely.

Moreover, the ratio tells us exactly how much more likely $M_{1}$ is than $M_{2}$: for example, a posterior odds ratio equal to 2 means that $M_{1}$ is twice as likely as $M_{2}$.

Models

Typically, in Bayesian inference, the evidence E is some observed data, and $M_{1}$ and $M_{2}$ are two statistical models, that is, two sets of probability distributions that could have generated the data.

In other words, the posterior odds ratio between $M_{1}$ and $M_{2}$ quantifies how much more likely $M_{1}$ is than $M_{2}$, based on the prior and the evidence provided by the data.

Note that the two sets of probability distributions can be singletons, that is, the comparison can involve single probability distributions.

The role of marginal likelihoods

Let us denote the data by x (instead of E) and probability density (or mass) functions by p, as we did in previous lectures.

When the two models are parametric, the priors are usually assigned in a hierarchical fashion.

In other words, we specify:

  1. the probabilities of the two models, $P(M_{1})$ and $P(M_{2})$;

  2. two parametrized likelihoods, $p(x|\theta_{1},M_{1})$ and $p(x|\theta_{2},M_{2})$;

  3. two prior distributions over the parameters of the two models, $p(\theta_{1}|M_{1})$ and $p(\theta_{2}|M_{2})$.

The posterior odds ratio is
$$\frac{P(M_{1}|x)}{P(M_{2}|x)}=\frac{p(x|M_{1})P(M_{1})}{p(x|M_{2})P(M_{2})}$$

The two quantities $p(x|M_{1})$ and $p(x|M_{2})$ are the so-called prior predictive distributions or marginal likelihoods.

If we are dealing with probability densities, the marginal likelihoods are obtained by computing two integrals:
$$p(x|M_{1})=\int p(x|\theta_{1},M_{1})\,p(\theta_{1}|M_{1})\,d\theta_{1},\qquad p(x|M_{2})=\int p(x|\theta_{2},M_{2})\,p(\theta_{2}|M_{2})\,d\theta_{2}$$
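These integrals can be approximated numerically. Here is a minimal sketch for a hypothetical coin-flipping example (the data, 8 heads in 10 flips, is invented for illustration): model $M_{1}$ is a singleton (a fair coin, $\theta=1/2$, so no integration is needed), while model $M_{2}$ puts a uniform prior on the unknown bias $\theta$, so its marginal likelihood is an integral over the parameter:

```python
import math

# Hypothetical data: k heads observed in n coin flips.
n, k = 10, 8
binom = math.comb(n, k)

def likelihood(theta):
    """Binomial likelihood of the data given the coin bias theta."""
    return binom * theta**k * (1 - theta)**(n - k)

# Marginal likelihood of M1 (fair coin): a singleton model, no integral.
m1 = likelihood(0.5)

# Marginal likelihood of M2 (theta ~ Uniform(0, 1)): integrate the
# likelihood against the prior density with a midpoint Riemann sum.
N = 100_000
m2 = sum(likelihood((i + 0.5) / N) for i in range(N)) / N

bayes_factor = m1 / m2
print(m1, m2, bayes_factor)  # m1 ≈ 0.0439, m2 ≈ 0.0909, BF ≈ 0.48
```

With equal prior model probabilities, the posterior odds ratio coincides with the Bayes factor, so here the evidence favors the unknown-bias model $M_{2}$ over the fair coin.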

This shows an important fact (see, e.g., O'Hagan 2006): in a comparison of two hierarchical models, the value of the Bayes factor[eq17] depends not only on the evidence provided by the data, but also on the prior distributions of the parameters.

Therefore, a Bayes factor can incorporate both objective (the data) and subjective (the priors) information.

What statisticians do

Although the posterior odds ratio can be used to neatly compare two models by using both prior information and the evidence provided by the data, statisticians often prefer to discuss and report Bayes factors.

Indeed, most of the books and papers that discuss Bayesian model comparisons (see, e.g., the marvellous introduction by Williams et al. 2017) focus almost exclusively on the properties of Bayes factors.

This is probably due to the fact that Bayes factors offer a more objective means of comparing models, by factoring out the more subjective prior odds ratio.

References

O'Hagan, T. (2006) Bayes factors, Significance, 3, 184-186.

Williams, M. N., Bååth, R. A. and Philipp, M. C. (2017). Using Bayes factors to test hypotheses in developmental research, Research in Human Development, 14, 321-337.

How to cite

Please cite as:

Taboga, Marco (2021). "Posterior odds ratio", Lectures on probability theory and mathematical statistics. Kindle Direct Publishing. Online appendix. https://www.statlect.com/fundamentals-of-statistics/posterior-odds-ratio.
