Hypothesis testing

Hypothesis testing is a method of making statistical inferences.

As we have discussed in the lecture entitled Statistical inference, a statistical inference is a statement about the probability distribution from which a sample $\xi $ has been drawn. The sample $\xi $ can be regarded as a realization of a random vector $\Xi $, whose unknown joint distribution function $F_{\Xi }$ is assumed to belong to a set of distribution functions $\Phi $, called a statistical model.

In hypothesis testing we make a statement about a model restriction involving a subset $\Phi _{R}\subseteq \Phi $ of the original model. We choose between two possible statements:

  1. reject the restriction $\Phi _{R}$;

  2. do not reject the restriction $\Phi _{R}$.

Roughly speaking, we start from a large set $\Phi $ of distributions that might possibly have generated the sample $\xi $, and we would like to restrict our attention to a smaller set $\Phi _{R}$. In a test of hypothesis, we use the sample $\xi $ to decide whether or not to restrict our attention to the smaller set $\Phi _{R}$.

If we have a parametric model, we can also carry out parametric tests of hypothesis.

Remember that in a parametric model the set of distribution functions $\Phi $ is put into correspondence with a set $\Theta \subseteq \mathbb{R} ^{p}$ of $p$-dimensional real vectors, called the parameter space. The elements of $\Theta $ are called parameters, and the true parameter is denoted by $\theta _{0}$. The true parameter is the parameter associated with the unknown distribution function $F_{\Xi }(\xi ;\theta _{0})$ from which the sample $\xi $ was actually drawn. For simplicity, $\theta _{0}$ is assumed to be unique.

In parametric hypothesis testing we have a restriction $\Theta _{R}\subseteq \Theta $ on the parameter space, and we choose one of the following two statements about the restriction:

  1. reject the restriction $\Theta _{R}$;

  2. do not reject the restriction $\Theta _{R}$.

For concreteness, we will focus on parametric hypothesis testing in this lecture, but most of what we say applies, with straightforward modifications, to hypothesis testing in general.

Null hypothesis

The hypothesis that the restriction is true is called the null hypothesis and is usually denoted by $H_{0}$:

$H_{0}:\ \theta _{0}\in \Theta _{R}$

Alternative hypothesis

The restriction $\theta _{0}\in \Theta _{R}^{c}$ (where $\Theta _{R}^{c}$ is the complement of $\Theta _{R}$) is often called the alternative hypothesis and is denoted by $H_{1}$:

$H_{1}:\ \theta _{0}\in \Theta _{R}^{c}$

For some authors, "rejecting the null hypothesis $H_{0}$" and "accepting the alternative hypothesis $H_{1}$" are synonyms. For other authors, however, "rejecting the null hypothesis $H_{0}$" does not necessarily imply "accepting the alternative hypothesis $H_{1}$". Although this is mostly a matter of language, one can envision situations in which, after rejecting $H_{0}$, a second test of hypothesis is performed in which $H_{1}$ becomes the new null hypothesis and is itself rejected (this may happen, for example, if the model is mis-specified). In such situations, treating "rejecting $H_{0}$" and "accepting $H_{1}$" as synonyms generates confusion, because the first test leads to "accept $H_{1}$" while the second leads to "reject $H_{1}$".

Also note that statisticians sometimes consider as an alternative hypothesis a set smaller than $\Theta _{R}^{c}$. In these cases, the null hypothesis and the alternative hypothesis do not cover all the possibilities contemplated by the parameter space $\Theta $. For example, with $\Theta =\mathbb{R} $ one may test $H_{0}:\theta _{0}=0$ against the one-sided alternative $H_{1}:\theta _{0}>0$, which is a proper subset of the complement $\{\theta :\theta \neq 0\}$.

Types of errors

When we decide whether or not to reject a restriction, we can commit two types of errors (a simulation sketch follows this list):

  1. reject the restriction $\Theta _{R}$ when the restriction is true; this is called an error of the first kind or a Type I error;

  2. do not reject the restriction $\Theta _{R}$ when the restriction is false; this is called an error of the second kind or a Type II error.
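
To make the two kinds of error concrete, here is a minimal Monte Carlo sketch in Python. Its ingredients (a normal sample with known unit variance, the null $H_{0}:\theta _{0}=0$, the rejection rule) are hypothetical choices made for this example, not part of the general theory.

    # Monte Carlo illustration of Type I and Type II errors.
    # Hypothetical setup: a sample of size n from N(theta_0, 1); we test
    # H0: theta_0 = 0 and reject when |sqrt(n) * sample mean| > 1.96.
    import numpy as np

    rng = np.random.default_rng(0)
    n, n_sims = 25, 20_000

    def rejects(theta):
        """Draw a fresh sample with true parameter theta and run the test."""
        sample = rng.normal(loc=theta, scale=1.0, size=n)
        statistic = np.sqrt(n) * sample.mean()   # the variance is known to be 1
        return abs(statistic) > 1.96

    # Type I error frequency: rejecting when H0 is true (theta = 0).
    type_1 = np.mean([rejects(0.0) for _ in range(n_sims)])
    # Type II error frequency: failing to reject when theta = 0.5 (H0 false).
    type_2 = np.mean([not rejects(0.5) for _ in range(n_sims)])
    print(f"Type I  frequency: {type_1:.3f} (theoretical size: 0.05)")
    print(f"Type II frequency: {type_2:.3f}")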

Critical region

Remember that the sample $\xi $ is regarded as a realization of a random vector $\Xi $ having support $R_{\Xi }$.

A test of hypothesis is usually carried out by explicitly or implicitly subdividing the support $R_{\Xi }$ into two disjoint subsets. One of the two subsets, denoted by $C_{\Xi }$, is called the critical region (or rejection region) and is the set of all values of $\xi $ for which the null hypothesis is rejected:

$C_{\Xi }=\{\xi \in R_{\Xi }:H_{0}$ is rejected when $\Xi =\xi \}$

The other subset is just the complement of the critical region,

$C_{\Xi }^{c}=R_{\Xi }\setminus C_{\Xi }$

and it is, of course, such that

$C_{\Xi }\cup C_{\Xi }^{c}=R_{\Xi }$
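
As a toy illustration of a critical region defined directly on the support, suppose (hypothetically) that $\Xi $ is a vector of $n$ coin flips and that $H_{0}$ states that the coin is fair; one possible rejection region is the set of samples that are all heads or all tails:

    # Hypothetical example: Xi = n coin flips (1 = heads), H0: "the coin is fair".
    # The critical region C_Xi is the subset of the support R_Xi made up of the
    # all-heads and all-tails samples.
    def in_critical_region(sample):
        heads = sum(sample)
        return heads == 0 or heads == len(sample)

    xi = (1, 1, 1, 1, 1)  # one observed sample
    print("reject H0" if in_critical_region(xi) else "do not reject H0")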

Test statistic

The critical region is often implicitly defined in terms of a test statistic and a critical region for the test statistic. A test statistic is a random variable $S$ whose realization is a function of the sample $\xi $. In symbols,

$S=s(\Xi )$

A critical region for $S$ is a subset $C_{S}\subseteq \mathbb{R} $ of the set of real numbers, and the test is performed based on the test statistic as follows: reject $H_{0}$ if $S\in C_{S}$; do not reject $H_{0}$ if $S\notin C_{S}$.

If the complement of the critical region $C_{S}$ is an interval, then its extremes are called critical values of the test.
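
As a minimal sketch, still under the hypothetical normal setup used above, the test statistic $S=\sqrt{n}\,\bar{\Xi }$ can be compared to the critical values $-z$ and $z$, the extremes of the interval that forms the complement of $C_{S}$:

    # Hypothetical z-test: N(theta_0, 1) sample, H0: theta_0 = 0. The complement
    # of the critical region for S is the interval [-z, z], so -z and z are the
    # critical values of the test.
    import numpy as np
    from scipy.stats import norm

    def z_test(sample, size=0.05):
        n = len(sample)
        s = np.sqrt(n) * np.mean(sample)   # realization of the test statistic S
        z = norm.ppf(1 - size / 2)         # critical value (1.96 when size = 0.05)
        return "reject H0" if abs(s) > z else "do not reject H0"

    rng = np.random.default_rng(1)
    print(z_test(rng.normal(loc=0.3, scale=1.0, size=50)))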

Power function

The power function of a test of hypothesis is the function that associates the probability of rejecting $H_{0}$ to each parameter $\theta \in \Theta $. Denoting the critical region by $C_{\Xi }$, the power function $\pi (\theta )$ is defined as follows:

$\pi (\theta )=P_{\theta }(\Xi \in C_{\Xi })$

where the notation $P_{\theta }$ is used to indicate the fact that the probability is calculated using the distribution function $F_{\Xi }(\xi ;\theta )$ associated with the parameter $\theta $.
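
For the hypothetical two-sided z-test sketched above, the power function has a closed form, because under the parameter $\theta $ the statistic $S$ is normal with mean $\sqrt{n}\,\theta $ and unit variance:

    # Power function pi(theta) of the two-sided z-test above: under theta the
    # statistic S is N(sqrt(n) * theta, 1), so the probability of rejection is
    # pi(theta) = P(S < -z) + P(S > z).
    import numpy as np
    from scipy.stats import norm

    def power(theta, n=25, z=1.96):
        m = np.sqrt(n) * theta             # mean of S under theta
        return norm.cdf(-z - m) + 1 - norm.cdf(z - m)

    for theta in (0.0, 0.2, 0.5, 1.0):
        print(f"pi({theta}) = {power(theta):.3f}")

Note that $\pi (0)=0.05$: when the null $\theta _{0}=0$ is true, this test rejects with probability equal to its size.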

Size of a test

When $\theta \in \Theta _{R}$, the power function $\pi (\theta )$ tells us the probability of committing a Type I error, i.e. the probability of rejecting the null hypothesis when the null hypothesis is true. The maximum probability of committing a Type I error is, therefore,

$\sup _{\theta \in \Theta _{R}}\pi (\theta )$

This maximum probability is called the size of the test. The size is also called by some authors the level of significance of the test. However, according to other authors, who assign a slightly different meaning to the term, the level of significance of a test is an upper bound on the size, i.e. a constant $\alpha $ that, to the statistician's knowledge, satisfies

$\sup _{\theta \in \Theta _{R}}\pi (\theta )\leq \alpha $
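
Continuing the hypothetical example, with a composite null such as $H_{0}:\theta _{0}\leq 0$ and a one-sided test, the size can be approximated numerically as the supremum of the power function over a grid covering $\Theta _{R}$:

    # Size of a hypothetical one-sided z-test of H0: theta_0 <= 0 that rejects
    # when S = sqrt(n) * sample mean > z. The power is increasing in theta, so
    # the supremum over Theta_R = (-inf, 0] is attained at theta = 0.
    import numpy as np
    from scipy.stats import norm

    def power_one_sided(theta, n=25, z=1.645):
        return 1 - norm.cdf(z - np.sqrt(n) * theta)

    grid = np.linspace(-2.0, 0.0, 201)     # grid approximating Theta_R
    size = max(power_one_sided(t) for t in grid)
    print(f"size: {size:.3f}")             # 1 - cdf(1.645), about 0.05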

Criteria to evaluate tests

Tests of hypothesis are most commonly evaluated based on their size and power. An ideal test should have size equal to 0 (i.e., the probability of rejecting the null hypothesis when the null hypothesis is true should be 0) and power equal to 1 when $\theta \notin \Theta _{R}$ (i.e., the probability of rejecting the null hypothesis when the null hypothesis is false should be 1). Such an ideal test is never found in practice; the best we can hope for is a test with a very small size and a very high probability of rejecting a false hypothesis. Nevertheless, this ideal is routinely used to choose among tests: for example, when choosing between two tests having the same size, we always prefer the one that has the higher power when $\theta \notin \Theta _{R}$; and when choosing between two tests that have the same power when $\theta \notin \Theta _{R}$, we always prefer the one that has the smaller size.

Several other criteria, beyond power and size, are used to evaluate tests of hypothesis. We do not discuss them here, but we refer the reader to the very nice exposition in Casella and Berger (2002).

Examples

Examples of hypothesis testing can be found in the following lectures:

  1. Hypothesis tests about the mean (examples of tests of hypothesis about the mean of an unknown distribution);

  2. Hypothesis tests about the variance (examples of tests of hypothesis about the variance of an unknown distribution).

References

Casella, G. and R. L. Berger (2002) Statistical Inference, Duxbury Advanced Series.
