The likelihood ratio (LR) test is a hypothesis test in which two different maximum likelihood estimates of a parameter are compared in order to decide whether or not to reject a restriction on the parameter.
Before going through this lecture, you are advised to get acquainted with the basics of hypothesis testing in a maximum likelihood framework (see the introductory lecture entitled Maximum likelihood - Hypothesis testing).
The likelihood ratio test is used to verify null hypotheses that can be written in the form
$$H_0: g(\theta_0) = 0$$
where $\theta_0$ is an unknown parameter belonging to a parameter space $\Theta \subseteq \mathbb{R}^p$, and $g: \mathbb{R}^p \rightarrow \mathbb{R}^r$ is a vector-valued function ($r \leq p$).
The above formulation of a null hypothesis is quite general, as many common parameter restrictions can be written in this form (see the aforementioned introductory lecture).
The likelihood ratio test is based on two different ML estimates of the parameter $\theta_0$.
One estimate, called the unrestricted estimate and denoted by $\widehat{\theta}_n$, is obtained as the solution of the unconstrained maximum likelihood problem
$$\widehat{\theta}_n = \arg\max_{\theta \in \Theta} l(\theta; \xi_n)$$
where $\xi_n$ is the sample of observed data and $l(\theta; \xi_n)$ is the log-likelihood function.
The other estimate, called the restricted estimate and denoted by $\widehat{\theta}_{R,n}$, is obtained as the solution of the constrained maximum likelihood problem
$$\widehat{\theta}_{R,n} = \arg\max_{\theta \in \Theta_R} l(\theta; \xi_n)$$
where
$$\Theta_R = \{\theta \in \Theta : g(\theta) = 0\}$$
is the set of parameters that satisfy the restriction being tested.
The test statistic, called the likelihood ratio statistic, is
$$LR_n = 2\left[l(\widehat{\theta}_n; \xi_n) - l(\widehat{\theta}_{R,n}; \xi_n)\right]$$
where $n$ is the sample size.
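As a concrete illustration (not taken from this lecture), suppose the data are modeled as i.i.d. draws from $N(\mu, \sigma^2)$ and the restriction being tested is $\mu = 0$. The following sketch solves the unconstrained and the constrained maximization problems with SciPy's general-purpose optimizer and then forms the statistic; the sample and the model are assumed for the sake of the example.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(theta, x):
    """Negative Gaussian log-likelihood; theta = (mu, log of sigma^2)."""
    mu, log_s2 = theta
    s2 = np.exp(log_s2)  # log parametrization keeps the variance positive
    return 0.5 * len(x) * np.log(2 * np.pi * s2) + np.sum((x - mu) ** 2) / (2 * s2)

x = np.array([0.5, -0.2, 1.1, 0.3, 0.9])  # a made-up sample

# Unrestricted problem: maximize the log-likelihood over the whole
# parameter space (i.e., minimize its negative).
res_u = minimize(neg_loglik, x0=np.zeros(2), args=(x,))

# Restricted problem: impose the restriction g(theta) = mu = 0 by
# optimizing only over the remaining free parameter.
res_r = minimize(lambda t: neg_loglik(np.array([0.0, t[0]]), x), x0=np.zeros(1))

# Likelihood ratio statistic: twice the difference between the two
# maximized log-likelihoods (res.fun holds the minimized *negative*).
lr = 2.0 * (res_r.fun - res_u.fun)
```

For this Gaussian model the statistic also has a closed form, $n \log(\widetilde{\sigma}^2 / \widehat{\sigma}^2)$, which can be used to check the numerical solution.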
In order to derive the asymptotic properties of the statistic $LR_n$, we are going to assume that:
both the restricted and the unrestricted estimators are asymptotically normal and satisfy the set of sufficient conditions for asymptotic normality given in the lecture on maximum likelihood estimation;
the entries of $g$ are continuously differentiable on $\Theta$ with respect to all the entries of $\theta$;
the $r \times p$ matrix of the partial derivatives of the entries of $g$ with respect to the entries of $\theta$, denoted by $J_g(\theta)$ and called the Jacobian of $g$, has rank $r$.
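When $g$ is given explicitly, the rank condition can be checked numerically. A minimal sketch (the restriction function below is a hypothetical example, not one from this lecture):

```python
import numpy as np

# Hypothetical restrictions on a 3-dimensional parameter:
# g(theta) = (theta_1 - theta_2, theta_3), so r = 2 and p = 3.
# The Jacobian (the r x p matrix of partial derivatives) is constant here:
J_g = np.array([[1.0, -1.0, 0.0],
                [0.0,  0.0, 1.0]])

rank = np.linalg.matrix_rank(J_g)  # full row rank r = 2
```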
Given the above assumptions, the following result can be proved.
Proposition If the null hypothesis is true and some technical conditions are satisfied (see above), the likelihood ratio statistic $LR_n$ converges in distribution to a Chi-square distribution with $r$ degrees of freedom.
By the Mean Value Theorem, the second-order expansion of $l(\widehat{\theta}_{R,n}; \xi_n)$ around $\widehat{\theta}_n$ can be written as
$$l(\widehat{\theta}_{R,n}; \xi_n) = l(\widehat{\theta}_n; \xi_n) + \nabla l(\widehat{\theta}_n; \xi_n)^{\top}(\widehat{\theta}_{R,n} - \widehat{\theta}_n) + \frac{1}{2}(\widehat{\theta}_{R,n} - \widehat{\theta}_n)^{\top} H(\bar{\theta}_n; \xi_n)(\widehat{\theta}_{R,n} - \widehat{\theta}_n)$$
where $H$ is the Hessian matrix of the log-likelihood (the matrix of its second partial derivatives) and $\bar{\theta}_n$ is an intermediate point between $\widehat{\theta}_{R,n}$ and $\widehat{\theta}_n$ (to be precise, there are several intermediate points, one for each row of the Hessian). Because the gradient is zero at an unconstrained maximum, we have that
$$\nabla l(\widehat{\theta}_n; \xi_n) = 0$$
and, as a consequence,
$$l(\widehat{\theta}_{R,n}; \xi_n) - l(\widehat{\theta}_n; \xi_n) = \frac{1}{2}(\widehat{\theta}_{R,n} - \widehat{\theta}_n)^{\top} H(\bar{\theta}_n; \xi_n)(\widehat{\theta}_{R,n} - \widehat{\theta}_n).$$
Thus, the likelihood ratio statistic can be written as
$$LR_n = 2\left[l(\widehat{\theta}_n; \xi_n) - l(\widehat{\theta}_{R,n}; \xi_n)\right] = -(\widehat{\theta}_{R,n} - \widehat{\theta}_n)^{\top} H(\bar{\theta}_n; \xi_n)(\widehat{\theta}_{R,n} - \widehat{\theta}_n).$$
By results that can be found in the proof of convergence of the score test statistic, we have that
$$\sqrt{n}(\widehat{\theta}_n - \widehat{\theta}_{R,n}) = -\left[\frac{1}{n} H(\widetilde{\theta}_n; \xi_n)\right]^{-1} \frac{1}{\sqrt{n}} \nabla l(\widehat{\theta}_{R,n}; \xi_n)$$
where $\widetilde{\theta}_n$ is another intermediate point, and that
$$\nabla l(\widehat{\theta}_{R,n}; \xi_n) = \left[J_g(\widehat{\theta}_{R,n})\right]^{\top} \widehat{\lambda}_n$$
where $J_g$ is the Jacobian of $g$ and $\widehat{\lambda}_n$ is the Lagrange multiplier of the constrained maximization problem. Note that the expression for the Lagrange multiplier includes a third intermediate point $\theta_n^{*}$. By putting all these things together, and defining
$$A_n(\theta) = \frac{1}{n} H(\theta; \xi_n)$$
we obtain
$$LR_n = -\frac{1}{\sqrt{n}}\widehat{\lambda}_n^{\top} J_g(\widehat{\theta}_{R,n}) A_n(\widetilde{\theta}_n)^{-1} A_n(\bar{\theta}_n) A_n(\widetilde{\theta}_n)^{-1} \left[J_g(\widehat{\theta}_{R,n})\right]^{\top} \widehat{\lambda}_n \frac{1}{\sqrt{n}}$$
where we have used the fact that $A_n(\widetilde{\theta}_n)$ is symmetric (being a matrix of second partial derivatives).
Under the null hypothesis, both $\widehat{\theta}_n$ and $\widehat{\theta}_{R,n}$ converge in probability to $\theta_0$. As a consequence, $\bar{\theta}_n$, $\widetilde{\theta}_n$ and $\theta_n^{*}$ also converge in probability to $\theta_0$, because their entries are comprised between the entries of $\widehat{\theta}_n$ and $\widehat{\theta}_{R,n}$. Furthermore, $A_n(\bar{\theta}_n)$ and $A_n(\widetilde{\theta}_n)$ converge in probability to $-V^{-1}$, where $V$ is the asymptotic covariance matrix of $\sqrt{n}(\widehat{\theta}_n - \theta_0)$. Therefore, by the Continuous Mapping Theorem,
$$A_n(\widetilde{\theta}_n)^{-1} A_n(\bar{\theta}_n) A_n(\widetilde{\theta}_n)^{-1} \overset{p}{\longrightarrow} (-V)(-V^{-1})(-V) = -V.$$
Thus, we can write the likelihood ratio statistic as a sequence of quadratic forms
$$LR_n = x_n^{\top} Q_n x_n$$
where
$$x_n = \frac{1}{\sqrt{n}} \widehat{\lambda}_n$$
and
$$Q_n = -J_g(\widehat{\theta}_{R,n}) A_n(\widetilde{\theta}_n)^{-1} A_n(\bar{\theta}_n) A_n(\widetilde{\theta}_n)^{-1} \left[J_g(\widehat{\theta}_{R,n})\right]^{\top} \overset{p}{\longrightarrow} J_g(\theta_0) V \left[J_g(\theta_0)\right]^{\top}.$$
As we have proved in the lecture on the Wald test, such a sequence of quadratic forms converges in distribution to a Chi-square random variable with $r$ degrees of freedom.
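The chi-square limit can be checked numerically. A minimal Monte Carlo sketch, assuming a Gaussian model with the single restriction $\mu = 0$ (for which the LR statistic has the closed form $n \log(\widetilde{\sigma}^2 / \widehat{\sigma}^2)$):

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_sims = 200, 2000
lr = np.empty(n_sims)
for i in range(n_sims):
    x = rng.normal(0.0, 1.0, size=n)        # data generated under H0: mu = 0
    sigma2_hat = x.var()                     # unrestricted MLE of the variance
    sigma2_tilde = np.mean(x ** 2)           # restricted MLE (mu fixed at 0)
    lr[i] = n * np.log(sigma2_tilde / sigma2_hat)  # closed-form LR statistic

# With one restriction (r = 1), the asymptotic 5% critical value is the
# 0.95 quantile of a Chi-square with 1 degree of freedom, about 3.8415.
rejection_rate = np.mean(lr > 3.8415)  # should be close to the nominal 5%
```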
Note that the likelihood ratio statistic, unlike the statistics used in the Wald test and in the score test, depends only on the maximized log-likelihoods at the two parameter estimates and not on an estimate of their asymptotic covariance matrix. This can be an advantage when the latter is difficult to obtain.
In the likelihood ratio test, the null hypothesis is rejected if
$$LR_n > z$$
where $z$ is a pre-specified critical value.
The size of the test can be approximated by its asymptotic value
$$P(LR_n > z) \approx 1 - F(z)$$
where $F$ is the cumulative distribution function of a Chi-square random variable having $r$ degrees of freedom. By appropriately choosing $z$, it is possible to achieve a pre-specified size $\alpha$, as follows:
$$z = F^{-1}(1 - \alpha).$$
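In code, the critical value and the implied asymptotic size can be obtained from the Chi-square quantile and survival functions. A sketch using SciPy, with $r$ and the size chosen arbitrarily for illustration:

```python
from scipy.stats import chi2

r = 2          # number of restrictions = degrees of freedom
alpha = 0.10   # desired size of the test

# Critical value achieving asymptotic size alpha: z = F^{-1}(1 - alpha),
# where F is the Chi-square(r) distribution function.
z = chi2.ppf(1.0 - alpha, df=r)

# Conversely, the asymptotic size implied by a given critical value z:
size = chi2.sf(z, df=r)  # sf(z) = 1 - F(z)
```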
The next example illustrates how the likelihood ratio statistic can be used.
Example
Let $\Theta = \mathbb{R}^3$, that is, the parameter space is the set of all $3$-dimensional real vectors. Denote the three entries of the true parameter $\theta_0$ by $\theta_{0,1}$, $\theta_{0,2}$ and $\theta_{0,3}$. Suppose the restrictions to be tested are
$$\theta_{0,1} = 0 \quad \text{and} \quad \theta_{0,2} = 0$$
so that $g$ is a function $g: \mathbb{R}^3 \rightarrow \mathbb{R}^2$ defined by
$$g(\theta) = \begin{bmatrix} \theta_1 \\ \theta_2 \end{bmatrix}.$$
We have that $r = 2$ and the Jacobian of $g$ is
$$J_g(\theta) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}.$$
It has rank $2$ because its two rows are linearly independent. Suppose we have obtained the constrained estimate $\widehat{\theta}_R$ and the unconstrained one $\widehat{\theta}$, and that we know the values $l(\widehat{\theta}_R; \xi)$ and $l(\widehat{\theta}; \xi)$ of the log-likelihood at the two estimates. These two values are used to compute the value of the test statistic:
$$LR = 2\left[l(\widehat{\theta}; \xi) - l(\widehat{\theta}_R; \xi)\right].$$
According to the rank calculations above, the statistic has a Chi-square distribution with $2$ degrees of freedom. Let us fix the size of the test at $10\%$. Then, the critical value is
$$z = F^{-1}(1 - 0.10) = F^{-1}(0.90) \approx 4.6052$$
where $F$ is the distribution function of a Chi-square random variable with $2$ degrees of freedom, and $F^{-1}(0.90)$ can be calculated with any statistical software (e.g., in MATLAB, with the command chi2inv(0.90,2)). Suppose the test statistic turns out to be below the critical value:
$$LR < 4.6052.$$
As a consequence, the null hypothesis cannot be rejected.
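The mechanics of the example can be replicated in a few lines. The two maximized log-likelihoods below are hypothetical placeholder values (the lecture's actual numbers are not assumed); only the decision rule is the point:

```python
from scipy.stats import chi2

# Hypothetical maximized log-likelihoods (placeholder values):
loglik_unrestricted = -115.3   # l(theta_hat; xi)
loglik_restricted = -116.5     # l(theta_hat_R; xi), never larger than the above

lr = 2.0 * (loglik_unrestricted - loglik_restricted)  # test statistic
z = chi2.ppf(0.90, df=2)   # critical value for size 10% with 2 restrictions
reject = lr > z            # False here: H0 cannot be rejected at the 10% level
```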