# Bayesian inference

Bayesian inference is a way of making statistical inferences in which the statistician assigns subjective probabilities to the distributions that could generate the data. These subjective probabilities form the so-called prior distribution. After the data is observed, Bayes' rule is used to update the prior, that is, to revise the probabilities assigned to the possible data generating distributions. These revised probabilities form the so-called posterior distribution.

## Inference

Remember that the main elements of a statistical inference problem are the following:

1. we observe some data (a sample);

2. we write the sample as a vector $x$;

3. we regard $x$ as the realization of a random vector $X$;

4. we do not know the probability distribution of $X$ (i.e., the distribution that generated our sample);

5. we define a statistical model, that is, a set $\Phi$ of probability distributions that could have generated the data;

6. optionally, we parametrize the model, that is, we put the elements of $\Phi$ in correspondence with a set $\Theta$ of real vectors called parameters;

7. we use the sample and the statistical model to make a statement (an inference) about the unknown data generating distribution (or about the parameter that corresponds to it).

In Bayesian inference, we assign a subjective distribution to the elements of $\Phi$, and then we use the data to derive a posterior distribution.

In parametric Bayesian inference, the subjective distribution is assigned to the parameters $\theta \in \Theta$ that are put into correspondence with the elements of $\Phi$.

## The likelihood

The first building block of a parametric Bayesian model is the likelihood
$$p(x \mid \theta)$$
which is equal to the probability density of the sample $x$ when the parameter of the true data generating distribution is equal to $\theta$.

Note that for the time being we are assuming that $x$ and $\theta$ are continuous. Later, we will discuss how to relax this assumption.

Example Suppose the sample $x = (x_1, \dots, x_n)$ is a vector of $n$ independent and identically distributed draws from a normal distribution. The mean $\mu$ of the distribution is unknown, while its variance $\sigma^2$ is known. These are the two parameters of the model. The probability density function of a generic draw $x_i$ is
$$p(x_i \mid \mu) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(x_i - \mu)^2}{2\sigma^2}\right)$$
where we use the notation $p(x_i \mid \mu)$ to highlight the fact that $\mu$ is unknown and the density of $x_i$ depends on this unknown parameter. Because the observations are independent, we can write the likelihood as
$$p(x \mid \mu) = \prod_{i=1}^n p(x_i \mid \mu) = \left(2\pi\sigma^2\right)^{-n/2} \exp\left(-\frac{1}{2\sigma^2}\sum_{i=1}^n (x_i - \mu)^2\right)$$
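As a quick numerical check on this factorization, here is a minimal sketch (function name and values are illustrative) that evaluates the log-likelihood of i.i.d. normal draws with known variance and compares it with the log of the product of the individual densities:

```python
import numpy as np

def normal_log_likelihood(x, mu, sigma2):
    """Log-likelihood of i.i.d. draws x from N(mu, sigma2), with sigma2 known."""
    x = np.asarray(x, dtype=float)
    n = x.size
    # log of prod_i (2*pi*sigma2)^(-1/2) * exp(-(x_i - mu)^2 / (2*sigma2))
    return -0.5 * n * np.log(2 * np.pi * sigma2) - np.sum((x - mu) ** 2) / (2 * sigma2)

# The sum of the individual log-densities must match the joint log-likelihood.
x = np.array([1.0, 2.0, 0.5])
densities = np.exp(-(x - 1.0) ** 2 / (2 * 2.0)) / np.sqrt(2 * np.pi * 2.0)
print(np.isclose(normal_log_likelihood(x, mu=1.0, sigma2=2.0), np.log(densities).sum()))
```

Working on the log scale is the standard way to avoid numerical underflow when $n$ is large.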

## The prior

The second building block of a Bayesian model is the prior
$$p(\theta)$$
which is equal to the subjective probability density assigned to the parameter $\theta$ of the data generating distribution.

Example Let us continue the previous example. The statistician believes that the parameter $\mu$ is most likely equal to $\mu_0$ and that values of $\mu$ very far from $\mu_0$ are quite unlikely. She expresses this belief about the parameter by assigning to it a normal distribution with mean $\mu_0$ and variance $\tau^2$. So the prior is
$$p(\mu) = \frac{1}{\sqrt{2\pi\tau^2}} \exp\left(-\frac{(\mu - \mu_0)^2}{2\tau^2}\right)$$

## The prior predictive distribution

Having specified the prior and the likelihood, we can derive the marginal density of $x$:
$$p(x) \overset{A}{=} \int p(x, \theta)\, d\theta \overset{B}{=} \int p(x \mid \theta)\, p(\theta)\, d\theta$$
where: in step $A$ we have performed the so-called marginalization (see the lecture on random vectors); in step $B$ we have used the fact that a joint density can be written as the product of a conditional and a marginal density (see the lecture on conditional probability distributions).

The notation
$$\int p(x \mid \theta)\, p(\theta)\, d\theta$$
is a shorthand for the multiple integral
$$\int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} p(x \mid \theta)\, p(\theta)\, d\theta_1 \cdots d\theta_k$$
where $k$ is the dimension of the parameter vector $\theta$.

The marginal density of $x$, derived in the manner above, is often called the prior predictive distribution. Roughly speaking, it is the probability distribution we assign to the data before observing it.

Example Given the prior and the likelihood specified in the previous two examples, it can be proved that the prior predictive distribution is
$$x \sim N\left(\mu_0 \iota,\; \sigma^2 I + \tau^2 \iota \iota^\top\right)$$
where $\iota$ is an $n \times 1$ vector of ones, and $I$ is the $n \times n$ identity matrix. Thus, the prior predictive distribution of $x$ is multivariate normal with mean $\mu_0 \iota$ and covariance matrix $\sigma^2 I + \tau^2 \iota \iota^\top$. Under the prior predictive distribution, a draw $x_i$ therefore has mean $\mu_0$, variance $\sigma^2 + \tau^2$, and covariance with each of the other draws equal to $\tau^2$. The covariance is induced by the fact that the mean parameter $\mu$, which is stochastic, is the same for all draws.
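These moment claims can be checked by simulation. The sketch below (all numerical values hypothetical: prior mean `mu0`, prior variance `tau2`, known variance `sigma2`, sample size `n`) generates prior predictive draws exactly as the model prescribes, first drawing the mean from the prior and then drawing the sample conditional on it:

```python
import numpy as np

# Monte Carlo check of the prior predictive moments (hypothetical values).
rng = np.random.default_rng(0)
mu0, tau2, sigma2, n = 1.0, 4.0, 2.0, 3   # prior mean/variance, known variance, sample size
reps = 200_000

mu = rng.normal(mu0, np.sqrt(tau2), size=reps)                # draw the mean from the prior
x = rng.normal(mu[:, None], np.sqrt(sigma2), size=(reps, n))  # draw the sample given the mean

print(x.mean())                          # ≈ mu0
print(x[:, 0].var())                     # ≈ sigma2 + tau2
print(np.cov(x[:, 0], x[:, 1])[0, 1])    # ≈ tau2 (the common mean induces the covariance)
```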

## The posterior

After observing the data $x$, the statistician can use Bayes' rule to update the prior about the parameter $\theta$:
$$p(\theta \mid x) = \frac{p(x \mid \theta)\, p(\theta)}{p(x)}$$

The conditional density $p(\theta \mid x)$ is called the posterior distribution of the parameter.

By using the formula for the marginal density $p(x)$ derived above, we obtain
$$p(\theta \mid x) = \frac{p(x \mid \theta)\, p(\theta)}{\int p(x \mid \theta)\, p(\theta)\, d\theta}$$
which makes clear that the posterior depends on the two distributions specified by the statistician: the prior $p(\theta)$ and the likelihood $p(x \mid \theta)$.

Example In the normal model of the previous examples, it can be proved that the posterior is
$$p(\mu \mid x) \propto \exp\left(-\frac{(\mu - \hat{\mu})^2}{2\hat{\tau}^2}\right)$$
where
$$\hat{\mu} = \frac{\dfrac{n}{\sigma^2}\,\bar{x} + \dfrac{1}{\tau^2}\,\mu_0}{\dfrac{n}{\sigma^2} + \dfrac{1}{\tau^2}}, \qquad \hat{\tau}^2 = \left(\frac{n}{\sigma^2} + \frac{1}{\tau^2}\right)^{-1}, \qquad \bar{x} = \frac{1}{n}\sum_{i=1}^n x_i$$
Thus, the posterior distribution of $\mu$ is normal with mean $\hat{\mu}$ and variance $\hat{\tau}^2$. Note that the posterior mean $\hat{\mu}$ is a weighted average of the mean of the observed data ($\bar{x}$) and the prior mean $\mu_0$. The weights are inversely proportional to the variances of the two means: if the prior variance $\tau^2$ is high, then the prior mean receives little weight; by the same token, if the variance of the sample mean (which is equal to $\sigma^2/n$) is high, then the sample mean receives little weight and more weight is assigned to the prior. Both the sample mean and the prior mean provide information about $\mu$. They are combined, but more weight is given to the signal that has higher precision (smaller variance). Note also that when the sample size $n$ becomes very large (goes to infinity), all the weight is given to the information coming from the sample (the sample mean) and no weight is given to the prior. This is typical of Bayesian inference.
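The precision-weighted update can be written as a small function. This is a sketch of the standard conjugate normal-normal update (parameter names are assumptions: prior mean `mu0`, prior variance `tau2`, known data variance `sigma2`):

```python
import numpy as np

def normal_posterior(x, mu0, tau2, sigma2):
    """Posterior mean and variance of the mean, under a N(mu0, tau2) prior
    and i.i.d. N(mean, sigma2) data with sigma2 known."""
    x = np.asarray(x, dtype=float)
    n = x.size
    precision = n / sigma2 + 1 / tau2     # posterior precision = sum of precisions
    post_var = 1 / precision
    post_mean = post_var * (n * x.mean() / sigma2 + mu0 / tau2)  # precision-weighted average
    return post_mean, post_var

# Two observations equal to 2, a N(0, 1) prior, unit data variance:
print(normal_posterior([2.0, 2.0], mu0=0.0, tau2=1.0, sigma2=1.0))  # (4/3, 1/3)
```

As the formula predicts, shrinking `tau2` toward zero pulls the posterior mean toward `mu0`, while increasing the sample size pulls it toward the sample mean.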

## Proportionality

Given any posterior density
$$p(\theta \mid x)$$
we can take any function of the data $c(x)$, that does not depend on $\theta$, and use it to build another function
$$g(\theta, x) = c(x)\, p(\theta \mid x)$$
We write
$$g(\theta, x) \propto p(\theta \mid x)$$
that is, $g$ is proportional to $p(\theta \mid x)$, in order to highlight that $g$ is equal to $p(\theta \mid x)$ times a constant (namely $c(x)$; remember that the data $x$ is a constant after being observed).

The posterior can be recovered from $g$ as follows:
$$\frac{g(\theta, x)}{\int g(\theta, x)\, d\theta} = \frac{c(x)\, p(\theta \mid x)}{\int c(x)\, p(\theta \mid x)\, d\theta} \overset{A}{=} \frac{c(x)\, p(\theta \mid x)}{c(x) \int p(\theta \mid x)\, d\theta} \overset{B}{=} \frac{c(x)\, p(\theta \mid x)}{c(x)} = p(\theta \mid x)$$
where: in step $A$ we have used the fact that $c(x)$ does not depend on $\theta$ and, as a consequence, can be brought out of the integral; in step $B$ we have used the fact that the integral of a density (over the whole support) is equal to $1$.

In summary, by multiplying the posterior by any constant (that does not depend on $\theta$, but may depend on $x$), we obtain a function proportional to the posterior. If we divide the new function by its integral over the whole parameter space, then we recover the posterior.
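This recovery step (divide a function proportional to the posterior by its integral) can be carried out numerically on a grid. In the sketch below (hypothetical data and prior for the normal model with known unit variance), the normalizing integral is approximated by a Riemann sum; because the model is conjugate, the grid posterior mean can be checked against the exact value:

```python
import numpy as np

# Grid approximation of the posterior of the mean (hypothetical data and prior).
mu0, tau2, sigma2 = 0.0, 1.0, 1.0
x = np.array([1.2, 0.8, 1.5])

grid = np.linspace(-5, 5, 20001)
step = grid[1] - grid[0]
prior = np.exp(-(grid - mu0) ** 2 / (2 * tau2))                       # prior kernel
lik = np.exp(-((x[:, None] - grid) ** 2).sum(axis=0) / (2 * sigma2))  # likelihood on the grid
unnorm = prior * lik                      # proportional to the posterior
post = unnorm / (unnorm.sum() * step)     # divide by the (approximate) integral to normalize

post_mean = (grid * post).sum() * step
print(post_mean)   # exact value: (sum(x)/sigma2 + mu0/tau2) / (n/sigma2 + 1/tau2) = 0.875
```

Constant factors dropped from the prior and the likelihood do not matter here: they cancel in the normalization, which is exactly the point of working up to proportionality.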

## The posterior is proportional to the prior times the likelihood

Note that in the posterior formula
$$p(\theta \mid x) = \frac{p(x \mid \theta)\, p(\theta)}{p(x)}$$
the marginal density
$$p(x) = \int p(x \mid \theta)\, p(\theta)\, d\theta$$
does not depend on $\theta$ (because $\theta$ is "integrated out"). Thus, by using the notation introduced in the previous section, we can write
$$p(\theta \mid x) \propto p(x \mid \theta)\, p(\theta)$$
that is, the posterior is proportional to the prior $p(\theta)$ times the likelihood $p(x \mid \theta)$.

Note that both $p(\theta)$ and $p(x \mid \theta)$ are known (they are specified by the statistician), so we are saying that the posterior (which we want to compute) is proportional to two known quantities. This proportionality is extremely important: there are various methods that allow us to exploit it in order to compute the posterior when the marginal density $p(x)$ cannot be computed and hence the posterior cannot be computed directly via Bayes' rule.

## The posterior predictive distribution

Suppose a second sample of data, denoted by $y$, is observed after observing the sample $x$ and updating the prior about the parameter $\theta$, that is, after computing the posterior $p(\theta \mid x)$.

Suppose that the distribution of $y$ also depends on $\theta$, but that $y$ is independent of $x$ conditional on $\theta$:
$$p(y \mid \theta, x) = p(y \mid \theta)$$

Then the distribution of $y$ given $x$ is
$$p(y \mid x) = \int p(y \mid \theta, x)\, p(\theta \mid x)\, d\theta = \int p(y \mid \theta)\, p(\theta \mid x)\, d\theta$$

The distribution of $y$ given $x$, derived in the manner above, is often called the posterior predictive distribution.

Example In the normal model of the previous examples, the prior is updated with the $n$ draws $x_1, \dots, x_n$. Consider a new draw $y$ from the same normal distribution. It can be proved that the posterior predictive distribution of $y$ is a normal distribution with mean $\hat{\mu}$ (the posterior mean of $\mu$) and variance $\sigma^2 + \hat{\tau}^2$, where $\hat{\tau}^2$ is the posterior variance of $\mu$.
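This can also be verified by simulation: draw the mean from the posterior, then draw the new observation given the mean. The sketch below uses hypothetical values for the posterior mean and variance:

```python
import numpy as np

# Simulate the posterior predictive of a new draw (hypothetical posterior values).
rng = np.random.default_rng(1)
post_mean, post_var, sigma2 = 0.875, 0.25, 1.0
reps = 200_000

mu = rng.normal(post_mean, np.sqrt(post_var), size=reps)  # draw the mean from the posterior
y = rng.normal(mu, np.sqrt(sigma2))                       # draw the new observation given it

print(y.mean())   # ≈ post_mean
print(y.var())    # ≈ sigma2 + post_var
```

The extra term in the predictive variance reflects the remaining uncertainty about the mean, on top of the intrinsic variability of a single draw.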

## Quantities of interest

After having updated the prior, we can make statements (inferences) about the parameter $\theta$ by using its posterior distribution; more generally, we can make statements about quantities that depend on $\theta$ by using the posterior predictive distribution introduced in the previous section. These quantities, about which we want to make a statement, are often called quantities of interest.

The Bayesian approach provides us with a posterior probability distribution of our quantity of interest. We are free to summarize that distribution in any way that we deem convenient or fits our purposes. For example, we can:

• plot the probability density (or mass) of our quantity of interest;

• report the mean of the distribution (as our best guess of the true value of the quantity of interest) and its standard deviation (as a measure of dispersion of our posterior beliefs);

• report the probability that the quantity of interest (say, a parameter) is equal (or very close) to a certain value which had previously been hypothesized (similarly to what is done in hypothesis testing).

## Integrals

Up to now we have assumed that $x$ and $\theta$ are continuous. When they are discrete, there are no substantial changes: probability density functions are replaced with probability mass functions, and integrals are replaced with summations.

For example, if $\theta$ is discrete and $x$ is continuous:

• the marginal density of $x$ becomes
$$p(x) = \sum_{\theta \in \Theta} p(x \mid \theta)\, p(\theta)$$
where $p(\theta)$ is the probability mass function of $\theta$, and the summation is over all possible values of $\theta$;

• the formula for the posterior probability mass function of $\theta$ is the same as in the continuous case:
$$p(\theta \mid x) = \frac{p(x \mid \theta)\, p(\theta)}{p(x)}$$
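A minimal numerical sketch of the discrete case (a hypothetical two-point prior on the mean of a normal distribution with known unit variance):

```python
import numpy as np

# Discrete parameter: the marginal is a sum, not an integral (hypothetical values).
thetas = np.array([0.0, 1.0])    # the two possible values of the mean
prior = np.array([0.5, 0.5])     # prior probability mass function
sigma2 = 1.0
x = np.array([0.9, 1.1])         # observed sample

# Likelihood of the whole sample at each candidate value (constant factors cancel).
lik = np.exp(-((x[:, None] - thetas) ** 2).sum(axis=0) / (2 * sigma2))
marginal = (lik * prior).sum()         # summation replaces the integral
posterior = lik * prior / marginal     # posterior probability mass function

print(posterior)   # the value 1.0 receives most of the mass
```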

## Factorization

It often happens that we are not able to apply Bayes' rule because we cannot derive the marginal distribution analytically.

However, given the joint distribution
$$p(\theta, x) = p(x \mid \theta)\, p(\theta)$$
if we are able to express it as
$$p(\theta, x) = c(x)\, g(\theta \mid x)$$
where $c(x)$ is a function that depends only on $x$, and $g(\theta \mid x)$ is a probability density (or mass) function of $\theta$ (for any fixed $x$), then
$$p(\theta \mid x) = g(\theta \mid x)$$

See the lecture on the factorization of probability density functions for a proof of this fact (and a detailed exposition with examples).
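As an illustration (a Bernoulli model with a uniform prior, not one of the running examples above), the factorization lets us read the posterior off the joint density without computing any integral:

```latex
% n Bernoulli(theta) draws with a uniform prior on theta, s = number of successes:
p(\theta, x) = p(x \mid \theta)\, p(\theta) = \theta^{s} (1 - \theta)^{n - s},
\qquad s = \sum_{i=1}^{n} x_i
% As a function of theta, this is the kernel of a Beta(s + 1, n - s + 1) density:
p(\theta, x) = \underbrace{B(s + 1,\, n - s + 1)}_{c(x)}\;
\underbrace{\frac{\theta^{s} (1 - \theta)^{n - s}}{B(s + 1,\, n - s + 1)}}_{g(\theta \mid x)}
% Therefore the posterior is g, that is, theta | x ~ Beta(s + 1, n - s + 1).
```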

## MCMC

There are several Bayesian models for which the posterior distribution of the parameters can be computed analytically. However, this is often not possible. When an analytical solution is not available, the methods most commonly employed to derive the posterior distribution numerically are the so-called Markov Chain Monte Carlo (MCMC) methods. They are Monte Carlo methods that generate a large sample of correlated draws from the posterior distribution of the parameter vector $\theta$ by simply using the proportionality
$$p(\theta \mid x) \propto p(x \mid \theta)\, p(\theta)$$
The empirical distribution of the generated sample can then be used to produce plug-in estimates of the quantities of interest.

See the lecture on MCMC methods for more details.
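A minimal random-walk Metropolis sketch, applied to the conjugate normal model so the output can be checked against the exact posterior (hypothetical data; in practice MCMC is used precisely when no exact answer exists). Note that the sampler only ever evaluates the unnormalized log posterior, i.e., log prior plus log likelihood:

```python
import numpy as np

rng = np.random.default_rng(2)
mu0, tau2, sigma2 = 0.0, 1.0, 1.0            # prior mean/variance, known data variance
x = np.array([1.2, 0.8, 1.5])                # hypothetical sample

def log_unnorm_posterior(mu):
    """log prior + log likelihood, dropping constants that do not depend on mu."""
    return -(mu - mu0) ** 2 / (2 * tau2) - np.sum((x - mu) ** 2) / (2 * sigma2)

draws = np.empty(30_000)
mu, lp = 0.0, log_unnorm_posterior(0.0)
for t in range(draws.size):
    proposal = mu + rng.normal(0.0, 0.8)               # random-walk proposal
    lp_prop = log_unnorm_posterior(proposal)
    if np.log(rng.uniform()) < lp_prop - lp:           # Metropolis acceptance step
        mu, lp = proposal, lp_prop
    draws[t] = mu

sample = draws[5_000:]                                 # discard burn-in
print(sample.mean())   # exact posterior mean: 0.875
print(sample.var())    # exact posterior variance: 0.25
```

The acceptance ratio involves only the ratio of unnormalized posteriors, so the intractable marginal density cancels out; this is the sense in which MCMC exploits the proportionality above.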
