
Bayesian inference

by Marco Taboga, PhD

Bayesian inference is a way of making statistical inferences in which the statistician assigns subjective probabilities to the distributions that could generate the data. These subjective probabilities form the so-called prior distribution.

After the data is observed, Bayes' rule is used to update the prior, that is, to revise the probabilities assigned to the possible data generating distributions. These revised probabilities form the so-called posterior distribution.

This lecture provides an introduction to Bayesian inference and discusses a simple example of inference about the mean of a normal distribution.


Review of the basics of statistical inference

Remember the main elements of a statistical inference problem:

  1. we observe some data (a sample), which we collect in a vector x;

  2. we regard x as the realization of a random vector X;

  3. we do not know the probability distribution of X (i.e., the distribution that generated our sample);

  4. we define a statistical model, that is, a set $\Phi$ of probability distributions that could have generated the data;

  5. optionally, we parametrize the model, that is, we put the elements of $\Phi$ in correspondence with a set of real vectors called parameters;

  6. we use the sample and the statistical model to make a statement (an inference) about the unknown data generating distribution (or about the parameter that corresponds to it).

In Bayesian inference, we assign a subjective distribution to the elements of $\Phi$, and then we use the data to derive a posterior distribution.

In parametric Bayesian inference, the subjective distribution is assigned to the parameters that are put into correspondence with the elements of $\Phi$.

The likelihood

The first building block of a parametric Bayesian model is the likelihood $p(x \mid \theta)$.

The likelihood is equal to the probability density of x when the parameter of the data generating distribution is equal to $\theta$.

For the time being, we assume that x and $\theta$ are continuous. Later, we will discuss how to relax this assumption.

Example

Suppose that the sample $x = (x_1, \ldots, x_n)$ is a vector of n independent and identically distributed draws $x_1, \ldots, x_n$ from a normal distribution.

The mean $\mu$ of the distribution is unknown, while its variance $\sigma^2$ is known. These are the two parameters of the model.

The probability density function of a generic draw $x_i$ is $p(x_i \mid \mu) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(x_i - \mu)^2}{2\sigma^2}\right),$ where we use the notation $p(x_i \mid \mu)$ to highlight the fact that $\mu$ is unknown and the density of $x_i$ depends on this unknown parameter.

Because the observations $x_1, \ldots, x_n$ are independent, we can write the likelihood as $p(x \mid \mu) = \prod_{i=1}^{n} p(x_i \mid \mu) = (2\pi\sigma^2)^{-n/2} \exp\left(-\frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i - \mu)^2\right).$
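For readers who like to see formulas in code, here is a minimal sketch (not part of the original lecture) that evaluates the log of this likelihood; the sample values, mu and sigma2 are arbitrary illustrative choices.

```python
import numpy as np

def normal_log_likelihood(x, mu, sigma2):
    """Log of the likelihood p(x | mu) for i.i.d. normal draws with known variance sigma2."""
    n = len(x)
    return -0.5 * n * np.log(2 * np.pi * sigma2) - np.sum((x - mu) ** 2) / (2 * sigma2)

# Hypothetical sample and parameter values, for illustration only.
x = np.array([1.2, 0.8, 1.5, 0.9, 1.1])
print(normal_log_likelihood(x, mu=1.0, sigma2=0.25))
```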

The prior

The second building block of a Bayesian model is the prior $p(\theta)$.

The prior is the subjective probability density assigned to the parameter $\theta$.

Example

Let us continue the previous example.

The statistician believes that the parameter $\mu$ is most likely equal to $\mu_0$ and that values of $\mu$ very far from $\mu_0$ are quite unlikely.

She expresses this belief about the parameter $\mu$ by assigning to it a normal distribution with mean $\mu_0$ and variance $\tau^2$.

So, the prior is $p(\mu) = \frac{1}{\sqrt{2\pi\tau^2}} \exp\left(-\frac{(\mu - \mu_0)^2}{2\tau^2}\right).$

The prior predictive distribution

After specifying the prior and the likelihood, we can derive the marginal density of x: $p(x) \overset{(A)}{=} \int p(x, \theta)\, d\theta \overset{(B)}{=} \int p(x \mid \theta)\, p(\theta)\, d\theta,$ where: in step (A) we perform the so-called marginalization (see the lecture on random vectors); in step (B) we use the fact that a joint density can be written as the product of a conditional and a marginal density (see the lecture on conditional probability distributions).

The notation $\int p(x \mid \theta)\, p(\theta)\, d\theta$ is a shorthand for the multiple integral $\int \cdots \int p(x \mid \theta_1, \ldots, \theta_K)\, p(\theta_1, \ldots, \theta_K)\, d\theta_1 \cdots d\theta_K,$ where K is the dimension of the parameter vector $\theta$.

The marginal density of x, derived in the manner above, is called the prior predictive distribution. Roughly speaking, it is the probability distribution that we assign to the data x before observing it.

Example

Given the prior and the likelihood specified in the previous two examples, it can be proved that the prior predictive distribution is $p(x) = (2\pi)^{-n/2} \det\left(\sigma^2 I + \tau^2 i i^{\top}\right)^{-1/2} \exp\left(-\frac{1}{2}\left(x - i\mu_0\right)^{\top}\left(\sigma^2 I + \tau^2 i i^{\top}\right)^{-1}\left(x - i\mu_0\right)\right),$ where i is an $n \times 1$ vector of ones, and I is the $n \times n$ identity matrix.

Hence, the prior predictive distribution of x is multivariate normal with mean $i\mu_0$ and covariance matrix $\sigma^2 I + \tau^2 i i^{\top}$.

Thus, under the prior predictive distribution, a draw $x_i$ has mean $\mu_0$, variance $\sigma^2 + \tau^2$, and covariance with the other draws equal to $\tau^2$.

The covariance is induced by the fact that the mean parameter $\mu$, which is stochastic, is the same for all draws.
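These moments can be checked with a quick simulation sketch (not in the original text; all numbers are illustrative): draw $\mu$ from the prior, then draw the sample given $\mu$, and compare the empirical moments with $\mu_0$, $\sigma^2 + \tau^2$ and $\tau^2$.

```python
import numpy as np

rng = np.random.default_rng(0)

mu0, tau2 = 1.0, 0.5      # prior mean and variance (illustrative)
sigma2, n = 0.25, 5       # known variance of the draws and sample size
n_sim = 200_000

# Simulate from the prior predictive: first mu ~ N(mu0, tau2), then x | mu ~ N(mu, sigma2).
mu = rng.normal(mu0, np.sqrt(tau2), size=(n_sim, 1))
x = rng.normal(mu, np.sqrt(sigma2), size=(n_sim, n))

print("mean of a draw      :", x[:, 0].mean(), "vs", mu0)
print("variance of a draw  :", x[:, 0].var(), "vs", sigma2 + tau2)
print("covariance of draws :", np.cov(x[:, 0], x[:, 1])[0, 1], "vs", tau2)
```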

The posterior

After observing the data x, we use Bayes' rule to update the prior about the parameter $\theta$: $p(\theta \mid x) = \frac{p(x \mid \theta)\, p(\theta)}{p(x)}.$

The conditional density $p(\theta \mid x)$ is called the posterior distribution of the parameter.

By using the formula for the marginal density $p(x)$ derived above, we obtain $p(\theta \mid x) = \frac{p(x \mid \theta)\, p(\theta)}{\int p(x \mid \theta)\, p(\theta)\, d\theta}.$

Thus, the posterior depends on the two distributions specified by the statistician: the prior $p(\theta)$ and the likelihood $p(x \mid \theta)$.

Example

In the normal model of the previous examples, it can be proved that the posterior is $p(\mu \mid x) = \frac{1}{\sqrt{2\pi\sigma_n^2}} \exp\left(-\frac{(\mu - \mu_n)^2}{2\sigma_n^2}\right),$ where $\mu_n = \frac{\frac{1}{\sigma^2/n}\,\bar{x} + \frac{1}{\tau^2}\,\mu_0}{\frac{1}{\sigma^2/n} + \frac{1}{\tau^2}}, \qquad \sigma_n^2 = \left(\frac{1}{\sigma^2/n} + \frac{1}{\tau^2}\right)^{-1}, \qquad \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i.$

Thus, the posterior distribution of $\mu$ is normal with mean $\mu_n$ and variance $\sigma_n^2$.

The posterior mean $\mu_n$ is a weighted average of:

  - the sample mean $\bar{x}$;

  - the prior mean $\mu_0$.

The weights are inversely proportional to the variances of the two means:

  - the variance of the sample mean is $\sigma^2 / n$;

  - the variance of the prior mean is $\tau^2$.

Both the sample mean and the prior mean provide information about $\mu$. They are combined together, but more weight is given to the signal that has the higher precision (smaller variance).

When the sample size n becomes very large (goes to infinity), all the weight is given to the information coming from the sample (the sample mean) and no weight is given to the prior. This behavior is typical of Bayesian inference.
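The following sketch (added here for illustration, with arbitrary numbers) computes $\mu_n$ and $\sigma_n^2$ from the formulas above and shows how the weight shifts toward the sample mean as n grows.

```python
import numpy as np

def normal_posterior(x, sigma2, mu0, tau2):
    """Posterior mean and variance of mu in the normal model with known variance sigma2."""
    n = len(x)
    precision = n / sigma2 + 1.0 / tau2                      # posterior precision
    mu_n = (n / sigma2 * np.mean(x) + mu0 / tau2) / precision
    sigma2_n = 1.0 / precision
    return mu_n, sigma2_n

rng = np.random.default_rng(1)
mu0, tau2, sigma2 = 0.0, 1.0, 0.25        # illustrative prior and known variance
for n in (5, 50, 5000):
    x = rng.normal(1.0, np.sqrt(sigma2), size=n)   # data generated with true mean 1.0
    print(n, normal_posterior(x, sigma2, mu0, tau2))
```

As n increases, the posterior mean approaches the sample mean (close to 1.0 here) and the posterior variance shrinks toward zero.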

The posterior predictive distribution

Suppose that a new data sample $y$ is extracted after we have observed x and we have computed the posterior distribution of the parameter $p(\theta \mid x)$.

Assume that the distribution of $y$ depends on $\theta$, but is independent of x conditional on $\theta$: $p(y \mid \theta, x) = p(y \mid \theta).$

Then the distribution of $y$ given x is $p(y \mid x) = \int p(y, \theta \mid x)\, d\theta = \int p(y \mid \theta, x)\, p(\theta \mid x)\, d\theta = \int p(y \mid \theta)\, p(\theta \mid x)\, d\theta.$

The distribution of $y$ given x, derived in the manner above, is called the posterior predictive distribution.

Example

In the normal model of the previous examples, the prior is updated with n draws $x_1, \ldots, x_n$.

Consider a new draw $x_{n+1}$ from the same normal distribution.

It can be proved that the posterior predictive distribution of $x_{n+1}$ is a normal distribution with mean $\mu_n$ (the posterior mean of $\mu$) and variance $\sigma^2 + \sigma_n^2$, where $\sigma_n^2$ is the posterior variance of $\mu$.
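To make this concrete, here is a small sketch (not from the lecture; the prior parameters and the data are hypothetical) that computes the posterior predictive mean, variance and a 95% interval for a new draw, using scipy.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
mu0, tau2, sigma2 = 0.0, 1.0, 0.25             # illustrative prior and known variance
x = rng.normal(1.0, np.sqrt(sigma2), size=10)  # observed sample (hypothetical)

n = len(x)
precision = n / sigma2 + 1.0 / tau2
mu_n = (n / sigma2 * x.mean() + mu0 / tau2) / precision
sigma2_n = 1.0 / precision

# Posterior predictive of a new draw x_{n+1}: normal with mean mu_n and variance sigma2 + sigma2_n.
predictive = stats.norm(loc=mu_n, scale=np.sqrt(sigma2 + sigma2_n))
print("predictive mean and variance:", predictive.mean(), predictive.var())
print("95% predictive interval     :", predictive.interval(0.95))
```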

Integrals

Up to now we have assumed that x and $\theta$ are continuous. When they are discrete, there are no substantial changes, but probability density functions are replaced with probability mass functions and integrals are replaced with summations.

For example, if $\theta$ is discrete and x is continuous, Bayes' rule becomes $p(\theta \mid x) = \frac{p(x \mid \theta)\, p(\theta)}{\sum_{\theta} p(x \mid \theta)\, p(\theta)},$ that is, the integral in the denominator is replaced with a sum over all the values of $\theta$ that have positive probability.
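As an illustrative sketch (not part of the original lecture), suppose the mean $\mu$ of the normal example could take only a few values with known prior probabilities; the posterior is then obtained by normalizing prior times likelihood with a sum.

```python
import numpy as np
from scipy import stats

# Discrete parameter: mu can take only a few values (illustrative).
mu_values = np.array([-1.0, 0.0, 1.0, 2.0])
prior = np.array([0.1, 0.4, 0.4, 0.1])           # prior probability mass function
sigma2 = 0.25

x = np.array([0.9, 1.1, 0.8, 1.3])               # observed sample (hypothetical)

# Likelihood of the whole sample for each candidate value of mu.
likelihood = np.array([stats.norm.pdf(x, loc=m, scale=np.sqrt(sigma2)).prod() for m in mu_values])

# Bayes' rule with a sum in the denominator instead of an integral.
posterior = prior * likelihood / np.sum(prior * likelihood)
print(dict(zip(mu_values, posterior.round(4))))
```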

Proportionality

We now take a moment to explain some simple algebra that is extremely important in Bayesian inference.

Given a posterior density $p(\theta \mid x)$, we can take any function of the data $q(x)$ that does not depend on $\theta$, and we can use it to build another function $g(x, \theta) = q(x)\, p(\theta \mid x).$

Since the data x is considered a constant after being observed, we write $p(\theta \mid x) \propto g(x, \theta),$ that is, $p(\theta \mid x)$ is proportional to $g(x, \theta)$.

The posterior can be recovered from $g(x, \theta)$ as follows: $\frac{g(x, \theta)}{\int g(x, \theta)\, d\theta} = \frac{q(x)\, p(\theta \mid x)}{\int q(x)\, p(\theta \mid x)\, d\theta} \overset{(A)}{=} \frac{q(x)\, p(\theta \mid x)}{q(x) \int p(\theta \mid x)\, d\theta} \overset{(B)}{=} p(\theta \mid x),$ where: in step (A) we use the fact that $q(x)$ does not depend on $\theta$ and, as a consequence, it can be brought out of the integral; in step (B) we use the fact that the integral of a density (over the whole support) is equal to 1.

In summary, when we multiply the posterior by a function that does not depend on $\theta$ (but may depend on x), we obtain a function $g(x, \theta)$ proportional to the posterior.

If we divide the new function $g(x, \theta)$ by its integral, then we recover the posterior.

The posterior is proportional to the prior times the likelihood

In the posterior formula $p(\theta \mid x) = \frac{p(x \mid \theta)\, p(\theta)}{p(x)},$ the marginal density $p(x) = \int p(x \mid \theta)\, p(\theta)\, d\theta$ does not depend on $\theta$ (because $\theta$ is "integrated out").

Thus, by using the notation introduced in the previous section, we can write $p(\theta \mid x) \propto p(\theta)\, p(x \mid \theta),$ that is, the posterior $p(\theta \mid x)$ is proportional to the prior $p(\theta)$ times the likelihood $p(x \mid \theta)$.

Both $p(\theta)$ and $p(x \mid \theta)$ are known because they are specified by the statistician.

Thus, the posterior (which we want to compute) is proportional to the product of two known quantities.

This proportionality to two known quantities is extremely important in Bayesian inference: various methods allow us to exploit it in order to compute the posterior when the marginal density $p(x)$ cannot be calculated and hence the posterior cannot be worked out directly from Bayes' rule.
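As a simple numerical illustration (not part of the original text), the normal-mean example can be approximated on a grid: we evaluate only the product prior times likelihood and normalize it with a Riemann sum, never computing $p(x)$ analytically. The grid limits and all parameter values are arbitrary choices.

```python
import numpy as np
from scipy import stats

mu0, tau2, sigma2 = 0.0, 1.0, 0.25               # hypothetical prior and known variance
x = np.array([0.9, 1.1, 0.8, 1.3, 1.0])          # hypothetical data

grid = np.linspace(-2.0, 3.0, 2001)
d = grid[1] - grid[0]
prior = stats.norm.pdf(grid, loc=mu0, scale=np.sqrt(tau2))
likelihood = np.array([stats.norm.pdf(x, loc=m, scale=np.sqrt(sigma2)).prod() for m in grid])

unnormalized = prior * likelihood                     # proportional to the posterior
posterior = unnormalized / (unnormalized.sum() * d)   # Riemann-sum normalization

post_mean = np.sum(grid * posterior) * d
post_var = np.sum((grid - post_mean) ** 2 * posterior) * d
print("approximate posterior mean and variance:", post_mean, post_var)
```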

Factorization

Often, we are not able to apply Bayes' rule $p(\theta \mid x) = \frac{p(x \mid \theta)\, p(\theta)}{p(x)}$ because we cannot derive the marginal distribution $p(x)$ analytically.

However, we are sometimes able to write the joint distribution $p(x \mid \theta)\, p(\theta)$ as $p(x \mid \theta)\, p(\theta) = c(x)\, f(\theta \mid x),$ where:

  - $c(x)$ is a function that depends only on the data x and not on $\theta$;

  - $f(\theta \mid x)$ is, for fixed x, a probability density function of $\theta$.

If we can work out this factorization, then $p(\theta \mid x) = f(\theta \mid x).$

See the lecture on the factorization of probability density functions for a proof of this fact.
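As a sketch of how this factorization works in the normal example above (this derivation is not spelled out in the original lecture; it only uses the definitions of $\mu_n$ and $\sigma_n^2$ given earlier):

```latex
\begin{align*}
p(x \mid \mu)\, p(\mu)
  &= (2\pi\sigma^2)^{-n/2}\,(2\pi\tau^2)^{-1/2}
     \exp\left( -\frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i-\mu)^2
                -\frac{(\mu-\mu_0)^2}{2\tau^2} \right) \\
  &= \underbrace{c(x)}_{\text{does not depend on } \mu}
     \;\underbrace{\frac{1}{\sqrt{2\pi\sigma_n^2}}
     \exp\left( -\frac{(\mu-\mu_n)^2}{2\sigma_n^2} \right)}_{f(\mu \mid x),\ \text{a normal density in } \mu}
\end{align*}
```

The second equality follows by completing the square in $\mu$; $c(x)$ collects all the factors that do not contain $\mu$. Hence, without ever computing $p(x)$, the factorization shows that the posterior of $\mu$ is normal with mean $\mu_n$ and variance $\sigma_n^2$, in agreement with the earlier example.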

MCMC

There are several Bayesian models that allow us to compute the posterior distribution of the parameters analytically. However, this is often not possible.

When an analytical solution is not available, Markov Chain Monte Carlo (MCMC) methods are commonly employed to derive the posterior distribution numerically.

MCMC methods are Monte Carlo methods that allow us to generate large samples of correlated draws from the posterior distribution of the parameter vector by simply using the proportionality $p(\theta \mid x) \propto p(\theta)\, p(x \mid \theta).$

The empirical distribution of the generated sample can then be used to produce plug-in estimates of the quantities of interest.
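As an illustration (not from the lecture), here is a minimal random-walk Metropolis-Hastings sampler for the normal-mean example; it only evaluates the product prior times likelihood, so the intractable marginal $p(x)$ never appears. The proposal standard deviation, burn-in length and data are arbitrary choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

mu0, tau2, sigma2 = 0.0, 1.0, 0.25
x = rng.normal(1.0, np.sqrt(sigma2), size=20)    # hypothetical data

def log_unnormalized_posterior(mu):
    """log of prior(mu) * likelihood(x | mu); the marginal p(x) is never needed."""
    log_prior = stats.norm.logpdf(mu, loc=mu0, scale=np.sqrt(tau2))
    log_lik = stats.norm.logpdf(x, loc=mu, scale=np.sqrt(sigma2)).sum()
    return log_prior + log_lik

# Random-walk Metropolis-Hastings: propose a move, accept it with the usual log-ratio rule.
draws, mu = [], 0.0
for _ in range(20_000):
    proposal = mu + rng.normal(0.0, 0.3)
    if np.log(rng.uniform()) < log_unnormalized_posterior(proposal) - log_unnormalized_posterior(mu):
        mu = proposal
    draws.append(mu)

draws = np.array(draws[5_000:])                  # discard burn-in
print("posterior mean and variance (MCMC):", draws.mean(), draws.var())
```

The sample mean and variance of the retained draws are plug-in estimates of the posterior mean and variance, and they can be compared with the exact values $\mu_n$ and $\sigma_n^2$.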

See the lecture on MCMC methods for more details.

Quantities of interest

After updating the prior, we can use the posterior distribution of $\theta$ to make statements about the parameter $\theta$ or about quantities that depend on $\theta$.

The quantities about which we make a statement are often called quantities of interest (e.g., Bernardo and Smith 2009) or objects of interest (e.g., Geweke 2005).

The Bayesian approach provides us with a posterior probability distribution of the quantity of interest. We are free to summarize that distribution in any way that we deem convenient.

For example, we can:

  - use the mean, median or mode of the posterior distribution as a point estimate;

  - use the quantiles of the posterior distribution to construct credible intervals;

  - compute the posterior probability that the quantity of interest belongs to a given set of values.
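A small numerical sketch of these summaries (not part of the original lecture; the posterior parameters are illustrative values for the normal example) follows.

```python
import numpy as np
from scipy import stats

# Posterior of mu in the normal example (values of mu_n and sigma2_n are illustrative).
mu_n, sigma2_n = 0.93, 0.012
posterior = stats.norm(loc=mu_n, scale=np.sqrt(sigma2_n))

print("point estimate (posterior mean):", posterior.mean())
print("95% credible interval          :", posterior.interval(0.95))
print("P(mu > 0 | x)                  :", 1 - posterior.cdf(0.0))
```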

More examples of Bayesian inference

Now that you know the basics of Bayesian inference, you can study two applications of it in the follow-up lectures.

References

Bernardo, J. M., and Smith, A. F. M. (2009) Bayesian Theory, Wiley.

Geweke, J. (2005) Contemporary Bayesian Econometrics and Statistics, Wiley.

How to cite

Please cite as:

Taboga, Marco (2021). "Bayesian inference", Lectures on probability theory and mathematical statistics. Kindle Direct Publishing. Online appendix. https://www.statlect.com/fundamentals-of-statistics/Bayesian-inference.
