
Exponential family of distributions

by Marco Taboga, PhD

An exponential family is a parametric family of distributions whose probability density (or mass) functions satisfy certain properties that make them highly tractable from a mathematical viewpoint.


Parametric families

Let us start by briefly reviewing the definition of a parametric family.

Let $\Phi$ be a set of probability distributions.

Put $\Phi$ in correspondence with a parameter space $\Theta \subseteq \mathbb{R}^{K}$.

If the correspondence is a function that associates one and only one distribution in $\Phi$ to each parameter $\theta \in \Theta$, then $\Phi$ is called a parametric family.

Example Let $\Phi$ be the set of all normal distributions. Each distribution is characterized by its mean $\mu$ (a real number) and its variance $\sigma^2$ (a positive real number). Thus, the set of distributions $\Phi$ is put into correspondence with the parameter space $\Theta = \mathbb{R} \times \mathbb{R}_{++}$. A member of the parameter space is a parameter vector $\theta = (\mu, \sigma^2)$. Since to each parameter $\theta$ there corresponds one and only one normal distribution, the set $\Phi$ of all normal distributions is a parametric family.


In what follows, we are going to focus our attention on parametric families of continuous distributions.

However, everything we say applies with straightforward modifications also to families of discrete distributions.


We can now define exponential families.

Definition A parametric family of univariate continuous distributions is said to be an exponential family if and only if the probability density function of any member of the family can be written as
$$f_X(x;\theta) = h(x)\exp\left[\eta(\theta) \cdot T(x) - A(\theta)\right]$$
where:

  - $h(x)$ is a function of $x$ alone;
  - $T(x)$ is an $L \times 1$ vector of functions of $x$, called the sufficient statistic;
  - $\eta(\theta)$ is an $L \times 1$ vector-valued function of the parameter $\theta$;
  - $A(\theta)$ is a function of $\theta$ alone;
  - $\cdot$ denotes the dot product.

The key property that characterizes an exponential family is the fact that $\theta$ and $x$ interact only via a dot product (after the appropriate transformations $\eta$ and $T$).

Log-partition function

Since the integral of a probability density function must be equal to 1, we have
$$\int h(x)\exp\left[\eta(\theta) \cdot T(x) - A(\theta)\right] dx = 1$$
or, equivalently,
$$A(\theta) = \ln \int h(x)\exp\left[\eta(\theta) \cdot T(x)\right] dx \qquad (1)$$

In other words, the function $A(\theta)$ is completely determined by the choice of $h$, $\eta$ and $T$.

The function $A(\theta)$ is called the log-partition function or log-normalizer.

Its exponential is a constant of proportionality, as we can write
$$f_X(x;\theta) \propto h(x)\exp\left[\eta(\theta) \cdot T(x)\right]$$
where $\propto$ is the proportionality symbol.
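As a numerical illustration (mine, not part of the original text), the log-partition function can be approximated by direct integration. The sketch below assumes the base measure $h(x) = 1$ on $[0, \infty)$ and the sufficient statistic $T(x) = x$, which yields the exponential distribution with exact log-partition $A(\eta) = -\ln(-\eta)$ for $\eta < 0$:

```python
import math

def log_partition(h, T, eta, lo, hi, n=200_000):
    """Approximate A(eta) = ln of the integral of h(x) * exp(eta * T(x))
    over [lo, hi] with a crude midpoint rule (a sketch, not production code)."""
    dx = (hi - lo) / n
    total = sum(h(lo + (i + 0.5) * dx) * math.exp(eta * T(lo + (i + 0.5) * dx))
                for i in range(n))
    return math.log(total * dx)

# h(x) = 1 on [0, inf), T(x) = x, eta < 0: exponential distribution,
# whose exact log-partition is A(eta) = -ln(-eta).
eta = -2.0
approx = log_partition(lambda x: 1.0, lambda x: x, eta, 0.0, 50.0)
print(approx, -math.log(-eta))  # the two values nearly coincide
```

Truncating the integral at 50 is harmless here because the integrand decays like $e^{-2x}$.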

Sufficient statistic

The vector $T(x)$ is called the sufficient statistic because it satisfies the factorization criterion for sufficiency: the density
$$f_X(x;\theta) = h(x)\exp\left[\eta(\theta) \cdot T(x) - A(\theta)\right]$$
is a product of:

  - a factor $h(x)$ that does not depend on the parameter $\theta$;
  - a factor $\exp\left[\eta(\theta) \cdot T(x) - A(\theta)\right]$ that depends on $x$ only through $T(x)$.

Natural parameter

The vector $\eta = \eta(\theta)$ is called the natural parameter.

When $\theta = \eta$, that is, when the family is parametrized directly by its natural parameter, the pdf of $X$ becomes
$$f_X(x;\eta) = h(x)\exp\left[\eta \cdot T(x) - A(\eta)\right]$$
where the log-partition function satisfies
$$A(\eta) = \ln \int h(x)\exp\left[\eta \cdot T(x)\right] dx$$

Base measure

The function $h(x)$ is called the base measure.

It is so-called because
$$f_X(x) \propto h(x)$$
in the base case in which $\eta = 0$.

All the members of the family are perturbations of the base measure, obtained by varying $\eta$.


The integral in equation (1) is not guaranteed to be finite.

As a consequence, an exponential family is well-defined only if $h(x)$ and $T(x)$ are chosen in such a way that the integral in equation (1) is finite for at least some values of $\eta$.


Since $\exp\left[\eta \cdot T(x) - A(\eta)\right]$ is strictly positive for finite $T(x)$, $\eta$ and $A(\eta)$, the density $f_X(x;\eta)$ is equal to zero only when $h(x)$ is.

Therefore, the base measure $h(x)$ determines the support of $X$, which does not depend on $\eta$.

How to build an exponential family

To summarize what we have explained above, let us list the main steps needed to build an exponential family:

  1. we choose a base measure $h(x)$;

  2. we choose a vector of sufficient statistics $T(x)$ of dimension $L \times 1$;

  3. we write the $L \times 1$ natural parameter as a function $\eta = \eta(\theta)$ of a $K \times 1$ parameter $\theta$;

  4. we try to find the log-partition function by computing the integral
$$A(\theta) = \ln \int h(x)\exp\left[\eta(\theta) \cdot T(x)\right] dx$$

  5. if the log-partition function is finite for some values of $\eta$, then we have built a family of distributions, called an exponential family, whose densities are of the form
$$f_X(x;\theta) = h(x)\exp\left[\eta(\theta) \cdot T(x) - A(\theta)\right]$$

This list of steps should clarify the fact that there are infinitely many exponential families: for each choice of the base measure and the vector of sufficient statistics, we obtain a different family.
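The steps above can be sketched in code. In this illustrative example (the particular choices $h(x) = 1$, $T(x) = x^2$ are mine, not from the text), the recipe produces the family of zero-mean normal distributions:

```python
import math

# Following the recipe:
# 1. base measure h(x) = 1 on the real line
# 2. sufficient statistic T(x) = x**2 (so L = 1)
# 3. natural parameter eta(theta) = theta, restricted to theta < 0
# 4. log-partition A(eta) = ln of integral of exp(eta * x^2) dx
#    = 0.5 * ln(pi / -eta), which is finite for every eta < 0
# 5. density f(x; eta) = exp(eta * x**2 - A(eta)): a zero-mean normal
#    with variance -1 / (2 * eta)

def A(eta):
    return 0.5 * math.log(math.pi / -eta)

def density(x, eta):
    return math.exp(eta * x * x - A(eta))

# Check against the usual N(0, sigma^2) density with sigma^2 = -1/(2*eta):
eta = -0.5                      # variance 1
x = 1.3
normal = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
print(density(x, eta), normal)  # the two values agree
```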

Joint moment generating function of the sufficient statistics

The joint moment generating function of the sufficient statistic is
$$M_{T(X)}(t) = \exp\left[A(\eta + t) - A(\eta)\right]$$


This is proved as follows:
$$M_{T(X)}(t) = E\left[\exp\left(t \cdot T(X)\right)\right] = \int h(x)\exp\left[(\eta + t) \cdot T(x) - A(\eta)\right] dx$$
$$= \exp\left[A(\eta + t) - A(\eta)\right] \int h(x)\exp\left[(\eta + t) \cdot T(x) - A(\eta + t)\right] dx \overset{(A)}{=} \exp\left[A(\eta + t) - A(\eta)\right]$$
where in step (A) we have used the fact that the integral is equal to 1 because it is the integral of a pdf (by the very definition of the log-partition function $A(\eta + t)$).
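A quick Monte Carlo check of the mgf formula, using the exponential distribution written in natural form as an illustrative example ($h(x) = 1$, $T(x) = x$, $A(\eta) = -\ln(-\eta)$ for $\eta < 0$; this particular family is my choice, not from the text):

```python
import math, random

random.seed(0)
eta, t = -2.0, 0.5                       # rate lambda = -eta = 2
A = lambda e: -math.log(-e)
formula = math.exp(A(eta + t) - A(eta))  # = (-eta) / (-(eta + t)) = 4/3

# Monte Carlo estimate of E[exp(t * T(X))] with T(X) = X:
draws = [random.expovariate(-eta) for _ in range(200_000)]
monte_carlo = sum(math.exp(t * x) for x in draws) / len(draws)
print(formula, monte_carlo)  # close, up to simulation error
```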

Expected value of the sufficient statistic

Denote the $l$-th entry of the sufficient statistic by $T_l(X)$.

Then, its expected value is
$$E\left[T_l(X)\right] = \frac{\partial A(\eta)}{\partial \eta_l}$$


The joint cumulant generating function (cgf) of $T(X)$ is
$$K_{T(X)}(t) = \ln M_{T(X)}(t) = A(\eta + t) - A(\eta)$$
By the properties of the cgf, its first partial derivative with respect to $t_l$, evaluated at $t = 0$, is equal to $E\left[T_l(X)\right]$. Therefore,
$$E\left[T_l(X)\right] = \left.\frac{\partial K_{T(X)}(t)}{\partial t_l}\right|_{t=0} = \frac{\partial A(\eta)}{\partial \eta_l}$$
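This identity can be checked numerically. The sketch below uses the exponential distribution in natural form ($h(x) = 1$, $T(x) = x$, $A(\eta) = -\ln(-\eta)$; an illustrative choice of mine) and compares a finite-difference derivative of $A$ with a sample mean:

```python
import math, random

random.seed(0)
eta = -2.0
A = lambda e: -math.log(-e)

# Numerical derivative dA/d(eta), which should equal E[T(X)] = -1/eta:
eps = 1e-6
dA = (A(eta + eps) - A(eta - eps)) / (2 * eps)

# Sample mean of T(X) = X for X exponential with rate -eta:
draws = [random.expovariate(-eta) for _ in range(200_000)]
sample_mean = sum(draws) / len(draws)
print(dA, sample_mean)  # both near -1/eta = 0.5
```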

Covariances between the entries of the sufficient statistic

The covariance between the $l$-th and $m$-th entries of the vector of sufficient statistics is
$$\mathrm{Cov}\left[T_l(X), T_m(X)\right] = \frac{\partial^2 A(\eta)}{\partial \eta_l \partial \eta_m}$$


Again, the joint cumulant generating function of the sufficient statistic is
$$K_{T(X)}(t) = A(\eta + t) - A(\eta)$$
By the properties of the cgf, its second cross-partial derivative with respect to $t_l$ and $t_m$, evaluated at $t = 0$, is equal to $\mathrm{Cov}\left[T_l(X), T_m(X)\right]$. Therefore,
$$\mathrm{Cov}\left[T_l(X), T_m(X)\right] = \left.\frac{\partial^2 K_{T(X)}(t)}{\partial t_l \partial t_m}\right|_{t=0} = \frac{\partial^2 A(\eta)}{\partial \eta_l \partial \eta_m}$$
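The covariance identity can be checked the same way. The sketch below again uses the exponential distribution in natural form ($h(x) = 1$, $T(x) = x$, $A(\eta) = -\ln(-\eta)$; my illustrative choice), where the single sufficient statistic has variance $\partial^2 A / \partial \eta^2 = 1/\eta^2$:

```python
import math, random

random.seed(0)
eta = -2.0
A = lambda e: -math.log(-e)

# Second-difference approximation of d^2 A / d(eta)^2:
eps = 1e-4
d2A = (A(eta + eps) - 2 * A(eta) + A(eta - eps)) / eps**2

# Sample variance of T(X) = X for X exponential with rate -eta:
draws = [random.expovariate(-eta) for _ in range(200_000)]
m = sum(draws) / len(draws)
sample_var = sum((x - m) ** 2 for x in draws) / len(draws)
print(d2A, sample_var)  # both near 1 / eta**2 = 0.25
```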


Several commonly used families of distributions are exponential. Here are some examples.

Normal distribution

The family of normal distributions with density
$$f_X(x;\mu,\sigma^2) = \frac{1}{\sigma\sqrt{2\pi}}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$$
is exponential:
$$h(x) = \frac{1}{\sqrt{2\pi}}, \qquad T(x) = \begin{bmatrix} x \\ x^2 \end{bmatrix}, \qquad \eta(\mu,\sigma^2) = \begin{bmatrix} \mu/\sigma^2 \\ -1/(2\sigma^2) \end{bmatrix}, \qquad A(\mu,\sigma^2) = \frac{\mu^2}{2\sigma^2} + \ln\sigma$$


We can write the density as follows:
$$f_X(x;\mu,\sigma^2) = \frac{1}{\sigma\sqrt{2\pi}}\exp\left(-\frac{x^2 - 2\mu x + \mu^2}{2\sigma^2}\right) = \frac{1}{\sqrt{2\pi}}\exp\left(\frac{\mu}{\sigma^2}x - \frac{1}{2\sigma^2}x^2 - \frac{\mu^2}{2\sigma^2} - \ln\sigma\right)$$
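This factorization of the normal density can be verified numerically. The decomposition used below is $h(x) = 1/\sqrt{2\pi}$, $T(x) = (x, x^2)$, $\eta = (\mu/\sigma^2, -1/(2\sigma^2))$, $A = \mu^2/(2\sigma^2) + \ln\sigma$:

```python
import math

mu, sigma, x = 1.0, 2.0, 0.7

# Ordinary normal pdf:
pdf = math.exp(-(x - mu) ** 2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

# Exponential-family form h(x) * exp(eta . T(x) - A):
h = 1 / math.sqrt(2 * math.pi)
eta = (mu / sigma**2, -1 / (2 * sigma**2))
T = (x, x**2)
A = mu**2 / (2 * sigma**2) + math.log(sigma)
family_form = h * math.exp(eta[0] * T[0] + eta[1] * T[1] - A)
print(pdf, family_form)  # identical up to rounding
```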

Binomial distribution

The family of binomial distributions with probability mass function
$$p_X(x;n,p) = \binom{n}{x} p^x (1-p)^{n-x}$$
is exponential for fixed $n$:
$$h(x) = \binom{n}{x}, \qquad T(x) = x, \qquad \eta(p) = \ln\frac{p}{1-p}, \qquad A(p) = -n\ln(1-p)$$


We can write the probability mass function as follows:
$$p_X(x;n,p) = \binom{n}{x} p^x (1-p)^{n-x} = \binom{n}{x}\exp\left(x\ln\frac{p}{1-p} + n\ln(1-p)\right)$$
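The binomial factorization can be verified numerically as well, using the decomposition $h(x) = \binom{n}{x}$, $T(x) = x$, $\eta = \ln(p/(1-p))$, $A = -n\ln(1-p)$:

```python
import math

n, p, x = 10, 0.3, 4

# Ordinary binomial pmf:
pmf = math.comb(n, x) * p**x * (1 - p) ** (n - x)

# Exponential-family form h(x) * exp(eta * T(x) - A) with T(x) = x:
eta = math.log(p / (1 - p))
A = -n * math.log(1 - p)
family_form = math.comb(n, x) * math.exp(eta * x - A)
print(pmf, family_form)  # identical up to rounding
```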

Other exponential families

We have already discussed the normal and binomial distributions.

Other important families of distributions previously discussed in these lectures are exponential (prove it as an exercise):

  - the Poisson distribution;
  - the exponential distribution;
  - the gamma distribution;
  - the chi-square distribution;
  - the beta distribution.

Constant parameters

In the binomial example above we have learned an important fact: there are cases in which a family of distributions is not exponential, but we can derive an exponential family from it by keeping one of the parameters fixed.

In other words, even if a family is not exponential, one of its subsets may be.

Equivalent representations

There are infinitely many equivalent ways to represent the same exponential family.

For example,
$$f_X(x;\eta) = h(x)\exp\left[\eta \cdot T(x) - A(\eta)\right]$$
is the same as
$$f_X(x;\eta) = h(x)\exp\left[\widetilde{\eta} \cdot \widetilde{T}(x) - A(\eta)\right]$$
where
$$\widetilde{T}(x) = cT(x), \qquad \widetilde{\eta} = \frac{\eta}{c}$$
for any constant $c \neq 0$.
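A quick numerical check of this equivalence, once more using the exponential distribution in natural form ($h(x) = 1$, $T(x) = x$, $A(\eta) = -\ln(-\eta)$; my illustrative choice) and a rescaling constant $c$:

```python
import math

eta, x, c = -2.0, 0.9, 3.0
A = lambda e: -math.log(-e)

# Original representation:
f1 = math.exp(eta * x - A(eta))

# Equivalent representation: T~(x) = c * x, eta~ = eta / c; the
# log-partition of the new representation is A~(e) = A(c * e), so that
# A~(eta~) = A(eta) and the density is unchanged:
f2 = math.exp((eta / c) * (c * x) - A(c * (eta / c)))
print(f1, f2)  # equal
```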

Maximum likelihood estimator

Let $x_1, \ldots, x_n$ be independently and identically distributed draws from a member of an exponential family having density
$$f_X(x;\eta_0) = h(x)\exp\left[\eta_0 \cdot T(x) - A(\eta_0)\right]$$

Then, the maximum likelihood estimator of the natural parameter $\eta_0$ is the value of $\eta$ that solves the equation
$$\nabla_\eta A(\eta) = \frac{1}{n}\sum_{i=1}^n T(x_i)$$


The likelihood of the sample is
$$L(\eta) = \prod_{i=1}^n h(x_i)\exp\left[\eta \cdot T(x_i) - A(\eta)\right]$$
The log-likelihood is
$$l(\eta) = \sum_{i=1}^n \ln h(x_i) + \eta \cdot \sum_{i=1}^n T(x_i) - nA(\eta)$$
The gradient of the log-likelihood with respect to the natural parameter vector $\eta$ is
$$\nabla_\eta l(\eta) = \sum_{i=1}^n T(x_i) - n\nabla_\eta A(\eta)$$
Therefore, the first order condition for a maximum is
$$\nabla_\eta A(\eta) = \frac{1}{n}\sum_{i=1}^n T(x_i)$$

There are two interesting things to note in the formula for the maximum likelihood estimator (MLE) of the parameter of an exponential family.

First, the MLE depends only on the sample average of the sufficient statistic, that is, on
$$\frac{1}{n}\sum_{i=1}^n T(x_i)$$

Regardless of the sample size $n$, all the information about the parameter provided by the sample is summarized by an $L \times 1$ vector.

Second, since $E_\eta\left[T(X)\right] = \nabla_\eta A(\eta)$, the MLE solves
$$E_\eta\left[T(X)\right] = \frac{1}{n}\sum_{i=1}^n T(x_i)$$
where the notation $E_\eta$ highlights that the expected value is computed with respect to a probability density that depends on $\eta$.

In other words, the MLE is obtained by matching the sample mean of the sufficient statistic with its population mean $E_\eta\left[T(X)\right]$.
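As a sketch of moment matching in action (the exponential-distribution family with $T(x) = x$ and $A(\eta) = -\ln(-\eta)$ is my illustrative choice): here $E_\eta[T(X)] = -1/\eta$, so matching it to the sample mean gives the closed-form MLE $\widehat{\eta} = -1/\bar{x}$:

```python
import random

random.seed(0)
true_eta = -2.0

# Draw an i.i.d. sample from the exponential distribution with rate -eta:
sample = [random.expovariate(-true_eta) for _ in range(100_000)]

# Sample mean of the sufficient statistic T(x) = x:
sample_mean_T = sum(sample) / len(sample)

# Moment matching: solve -1 / eta = sample_mean_T for eta:
eta_hat = -1 / sample_mean_T
print(eta_hat)  # close to the true natural parameter -2
```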

Multivariate generalization

The definition of an exponential family of multivariate distributions is a straightforward generalization of the definition given above for univariate distributions.

Definition A parametric family of $N$-dimensional multivariate continuous distributions is said to be an exponential family if and only if the joint probability density function of any member of the family can be written as
$$f_X(x;\theta) = h(x)\exp\left[\eta(\theta) \cdot T(x) - A(\theta)\right]$$
where:

  - $x$ is an $N \times 1$ vector;
  - $h(x)$ is a function of $x$ alone;
  - $T(x)$ is an $L \times 1$ vector of functions of $x$;
  - $\eta(\theta)$ is an $L \times 1$ vector-valued function of the parameter $\theta$;
  - $A(\theta)$ is a function of $\theta$ alone.

This definition is virtually identical to the previous one. The only difference is that $x$ is no longer a scalar, but an $N \times 1$ vector.

All the main results (about the moments and the mgf of the sufficient statistic, and about maximum likelihood estimation) also remain unchanged.

As an exercise, you can check that in all the proofs above it does not matter whether x is a scalar or a vector.

The only thing that changes is that we need to compute a multiple integral, instead of a simple integral, in order to work out the log-partition function.

Examples of multivariate exponential families are those of:

  - the multivariate normal distribution;
  - the multinomial distribution (for a fixed number of trials).

How to cite

Please cite as:

Taboga, Marco (2021). "Exponential family of distributions", Lectures on probability theory and mathematical statistics. Kindle Direct Publishing. Online appendix.
