
Probit classification model (or probit regression)

by Marco Taboga, PhD

This lecture deals with the probit model, a binary classification model in which the conditional probability of one of the two possible values of the output variable is equal to the cumulative distribution function of the standard normal distribution evaluated at a linear combination of the inputs.

Table of Contents

- Model specification
- Interpretation
- The probit model as a latent variable model
- Estimation by maximum likelihood
- Hypothesis testing

Model specification

Assume that a sample of data $(y_{i},x_{i})$, for $i=1,\ldots ,N$, is observed, where:

- the output $y_{i}$ is a variable that can take only two values, 0 and 1;
- the inputs $x_{i}$ form a $1\times K$ row vector.

The conditional probability that the output $y_{i}$ is equal to 1, given the inputs $x_{i}$, is assumed to be $$P(y_{i}=1\mid x_{i})=F(x_{i}\beta )$$ where $F(t)$ is the cumulative distribution function of the standard normal distribution and $\beta $ is a $K\times 1$ vector of coefficients.

Moreover, if $y_{i}$ is not equal to 1, then it is equal to 0 (no other values are possible), and the probabilities of the two values need to sum up to 1, so that $$P(y_{i}=0\mid x_{i})=1-F(x_{i}\beta )$$
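
As a concrete numerical illustration, the following sketch (Python, using NumPy and SciPy) evaluates the two conditional probabilities for a single observation; the values chosen for $x_{i}$ and $\beta $ are made up for the example.

    import numpy as np
    from scipy.stats import norm

    # Hypothetical inputs and coefficients (K = 3); the first input is a constant term.
    x_i = np.array([1.0, 0.5, -1.2])    # 1xK row vector of inputs
    beta = np.array([0.3, 0.8, 0.4])    # Kx1 vector of coefficients

    index = x_i @ beta                  # linear combination of the inputs
    p1 = norm.cdf(index)                # P(y_i = 1 | x_i) = F(x_i beta)
    p0 = 1.0 - p1                       # P(y_i = 0 | x_i) = 1 - F(x_i beta)
    print(p1, p0, p1 + p0)              # the two probabilities sum to 1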

Interpretation

The interpretation of the probit model is very similar to that of the logit model. You are advised to read the comments about the interpretation of the latter in the lecture entitled Logistic classification model.

The probit model as a latent variable model

As in the case of the logit, the probit model can also be written as a latent variable model.

Define a latent variable $$y_{i}^{\ast }=x_{i}\beta +\varepsilon _{i}\qquad (1)$$ where $\varepsilon _{i}$ is a random error term having a standard normal distribution. The output $y_{i}$ is linked to the latent variable by the following relationship: $$y_{i}=\begin{cases}1 & \text{if } y_{i}^{\ast }>0\\ 0 & \text{if } y_{i}^{\ast }\leq 0\end{cases}\qquad (2)$$ We have that $$P(y_{i}=1\mid x_{i})=P(y_{i}^{\ast }>0\mid x_{i})=P(\varepsilon _{i}>-x_{i}\beta )=1-F(-x_{i}\beta )=F(x_{i}\beta )$$ where the last equality follows from the symmetry of the standard normal distribution around zero, so that the latent variable model specified by (1) and (2) assigns to the output the same conditional distribution assigned by the probit model.
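
The equivalence can also be checked by simulation. The sketch below (Python, NumPy/SciPy; the values of $x_{i}$ and $\beta $ are again hypothetical) draws many standard normal errors, applies rules (1) and (2), and compares the simulated frequency of $y_{i}=1$ with the probit probability $F(x_{i}\beta )$.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    x_i = np.array([1.0, 0.5, -1.2])        # hypothetical 1xK row vector of inputs
    beta = np.array([0.3, 0.8, 0.4])        # hypothetical Kx1 vector of coefficients

    eps = rng.standard_normal(100_000)      # standard normal error terms
    y_star = x_i @ beta + eps               # latent variable, equation (1)
    y = (y_star > 0).astype(int)            # observation rule, equation (2)

    print(y.mean())                         # simulated frequency of y_i = 1
    print(norm.cdf(x_i @ beta))             # F(x_i beta), the probit probability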

Estimation by maximum likelihood

The vector of coefficients $\beta $ can be estimated by maximum likelihood (ML).

We assume that the observations $(y_{i},x_{i})$ in the sample are independently and identically distributed (IID) and that the $N\times K$ matrix of inputs defined by $$X=\begin{bmatrix}x_{1}\\ x_{2}\\ \vdots \\ x_{N}\end{bmatrix}$$ has full rank.

In a separate lecture (ML estimation of the probit model), we demonstrate that the ML estimator $\widehat{\beta }$ can be found (if it exists) with the following iterative procedure.

Starting from an initial guess of the solution $\widehat{\beta }_{0}$ (e.g., $\widehat{\beta }_{0}=0$), we generate a sequence of guesses $$\widehat{\beta }_{t}=\widehat{\beta }_{t-1}+\left( X^{\top }W_{t-1}X\right) ^{-1}X^{\top }\lambda _{t-1}$$

$W_{t-1}$ is an $N\times N$ diagonal matrix and $\lambda _{t-1}$ is an $N\times 1$ vector. They are calculated from the previous guess $\widehat{\beta }_{t-1}$ as follows: the $i$-th diagonal entry of $W_{t-1}$ is $$\left[ W_{t-1}\right] _{ii}=\frac{\left[ f(x_{i}\widehat{\beta }_{t-1})\right] ^{2}}{F(x_{i}\widehat{\beta }_{t-1})\left[ 1-F(x_{i}\widehat{\beta }_{t-1})\right] }$$ and the $i$-th entry of $\lambda _{t-1}$ is $$\left[ \lambda _{t-1}\right] _{i}=\frac{f(x_{i}\widehat{\beta }_{t-1})\left[ y_{i}-F(x_{i}\widehat{\beta }_{t-1})\right] }{F(x_{i}\widehat{\beta }_{t-1})\left[ 1-F(x_{i}\widehat{\beta }_{t-1})\right] }$$ where $f$ denotes the probability density function of the standard normal distribution.

The iterative procedure stops when numerical convergence is achieved, that is, when the difference between two successive guesses $\widehat{\beta }_{t}$ and $\widehat{\beta }_{t-1}$ is so small that we can ignore it.

If $T$ is the last step of the iterative procedure, then the maximum likelihood estimator is $$\widehat{\beta }=\widehat{\beta }_{T}$$ and its asymptotic covariance matrix is $$\left( X^{\top }WX\right) ^{-1}$$ where $W=W_{T}$.

As a consequence, the distribution of $\widehat{\beta }$ can be approximated by a normal distribution with mean equal to the true parameter $\beta $ and covariance matrix $\left( X^{\top }WX\right) ^{-1}$.
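
The sketch below implements the iterative procedure in Python (NumPy/SciPy), under the weighting formulae given above. The function name, the convergence tolerance and the maximum number of iterations are our own choices; treat it as an illustration rather than a reference implementation.

    import numpy as np
    from scipy.stats import norm

    def probit_ml(X, y, tol=1e-8, max_iter=100):
        # X: (N, K) matrix of inputs; y: (N,) vector of 0/1 outputs.
        # Returns the ML estimate and the estimated covariance matrix (X'WX)^{-1}.
        K = X.shape[1]
        beta = np.zeros(K)                        # initial guess: beta_0 = 0
        for _ in range(max_iter):
            index = X @ beta
            F, f = norm.cdf(index), norm.pdf(index)
            denom = F * (1.0 - F)
            w = f ** 2 / denom                    # diagonal entries of W_{t-1}
            lam = f * (y - F) / denom             # entries of lambda_{t-1}
            step = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ lam)
            beta, beta_old = beta + step, beta    # beta_t = beta_{t-1} + (X'WX)^{-1} X' lambda
            if np.max(np.abs(beta - beta_old)) < tol:
                break                             # numerical convergence achieved
        # Weights at the final estimate give the estimated covariance matrix (X'WX)^{-1}.
        index = X @ beta
        F, f = norm.cdf(index), norm.pdf(index)
        w = f ** 2 / (F * (1.0 - F))
        return beta, np.linalg.inv(X.T @ (w[:, None] * X))

    # Example usage with simulated data (made-up design matrix and coefficients):
    # rng = np.random.default_rng(0)
    # X = np.column_stack([np.ones(500), rng.standard_normal((500, 2))])
    # y = (X @ np.array([0.3, 0.8, 0.4]) + rng.standard_normal(500) > 0).astype(int)
    # beta_hat, cov_hat = probit_ml(X, y)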

Hypothesis testing

When we estimate the coefficients of a probit classification model by maximum likelihood (see previous section), we can carry out hypothesis tests based on maximum likelihood procedures (e.g., Wald, Likelihood Ratio, Lagrange Multiplier) to test a null hypothesis about the coefficients.

Furthermore, we can set up a z test to test a restriction on a single coefficient: $$H_{0}:\beta _{k}=q$$ where $\beta _{k}$ is the k-th entry of the vector of coefficients $\beta $ and $q\in \mathbb{R}$.

The test statistic is $$z=\frac{\widehat{\beta }_{k}-q}{\sqrt{\left[ \left( X^{\top }WX\right) ^{-1}\right] _{kk}}}$$ where $\widehat{\beta }_{k}$ is the k-th entry of $\widehat{\beta }$ and $\left[ \left( X^{\top }WX\right) ^{-1}\right] _{kk}$ is the k-th entry on the diagonal of the matrix $\left( X^{\top }WX\right) ^{-1}$.

Since $\widehat{\beta }$ is asymptotically normal and $\left( X^{\top }WX\right) ^{-1}$ is a consistent estimator of the asymptotic covariance matrix of $\widehat{\beta }$, $z$ converges in distribution to a standard normal distribution (the proof is identical to the proof we have provided for the asymptotic normality of the z statistic in the lecture on the logit model).

By approximating the distribution of $z$ with its asymptotic one (a standard normal), we can derive critical values (depending on the desired size) and carry out the test.
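
For completeness, here is a small Python sketch of the z test, continuing the hypothetical probit_ml function above; the index k, the value q and the two-sided p-value convention are our choices for the illustration.

    import numpy as np
    from scipy.stats import norm

    def probit_z_test(beta_hat, cov_hat, k, q=0.0):
        # Tests the restriction beta_k = q using the k-th diagonal entry of (X'WX)^{-1}.
        se_k = np.sqrt(cov_hat[k, k])             # standard error of the k-th coefficient
        z = (beta_hat[k] - q) / se_k              # z statistic
        p_value = 2.0 * (1.0 - norm.cdf(abs(z)))  # two-sided p-value from the standard normal
        return z, p_value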

How to cite

Please cite as:

Taboga, Marco (2021). "Probit classification model (or probit regression)", Lectures on probability theory and mathematical statistics. Kindle Direct Publishing. Online appendix. https://www.statlect.com/fundamentals-of-statistics/probit-classification-model.
