
Linear regression model

by Marco Taboga, PhD

A linear regression model is a conditional model in which the output variable is a linear function of the input variables and of an unobservable error term that adds noise to the relationship between inputs and outputs.

This lecture introduces the main mathematical assumptions, the matrix notation and the terminology used in linear regression models.


Dependent and independent variables

We assume that the statistician observes a sample of realizations $(y_{i},x_{i})$ for $i=1,\ldots,N$, where:

- $y_{i}$ is the output variable (also called the dependent variable), a scalar;
- $x_{i}$ is a $1\times K$ vector of input variables (also called the independent variables, or regressors).

Regression coefficients and errors

Inputs and outputs are assumed to have a linear relationship: $$y_{i}=x_{i}\beta+\varepsilon_{i}$$ where:

- $\beta$ is a $K\times 1$ vector of constants, called regression coefficients;
- $\varepsilon_{i}$ is an unobservable error term that adds noise to the relationship between inputs and outputs.

The linear relationship is assumed to hold for each $i=1,\ldots,N$, with the same $\beta$.

Example

Let us look at an example.

Suppose that we have a sample of individuals for which weight, height and age are observed.

We want to set up a linear regression model to predict weight based on height and age.

Then, we could postulate that $$\text{weight}_{i}=\beta_{1}+\beta_{2}\,\text{height}_{i}+\beta_{3}\,\text{age}_{i}+\varepsilon_{i}$$ where:

- $\beta_{1}$, $\beta_{2}$ and $\beta_{3}$ are regression coefficients;
- $\varepsilon_{i}$ is an error term.

The regression equation can be written in vector notation as $$\text{weight}_{i}=x_{i}\beta+\varepsilon_{i}$$ by defining $$x_{i}=\begin{bmatrix}1 & \text{height}_{i} & \text{age}_{i}\end{bmatrix},\qquad \beta=\begin{bmatrix}\beta_{1}\\ \beta_{2}\\ \beta_{3}\end{bmatrix}$$ where $x_{i}$ is a $1\times 3$ vector and $\beta$ is a $3\times 1$ vector.
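The following is a minimal numerical sketch of this example in Python. All coefficient values and the individual's height and age are made-up numbers, used only to illustrate the vector notation:

```python
# Sketch of the weight/height/age example in vector notation.
# The coefficients and the individual's data are hypothetical.
import numpy as np

beta = np.array([-50.0, 0.6, 0.1])      # hypothetical [intercept, height, age] coefficients

height_i, age_i = 175.0, 30.0           # one hypothetical individual
x_i = np.array([1.0, height_i, age_i])  # 1x3 regressor vector (constant first)

predicted_weight = x_i @ beta           # x_i * beta, the systematic part of weight_i
print(predicted_weight)                 # the observed weight adds the error term eps_i
```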


Matrix notation

Denote by $y$ the $N\times 1$ vector of outputs $$y=\begin{bmatrix}y_{1}\\ y_{2}\\ \vdots\\ y_{N}\end{bmatrix}$$ by $X$ the $N\times K$ matrix of inputs, whose $i$-th row is the vector $x_{i}$, $$X=\begin{bmatrix}x_{1}\\ x_{2}\\ \vdots\\ x_{N}\end{bmatrix}$$ and by $$\varepsilon=\begin{bmatrix}\varepsilon_{1}\\ \varepsilon_{2}\\ \vdots\\ \varepsilon_{N}\end{bmatrix}$$ the $N\times 1$ vector of error terms.

Then, the linear relationship can be expressed in matrix form as $$y=X\beta+\varepsilon$$

The matrix $X$ is called the design matrix.
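As a sketch of how the matrix form collapses the $N$ regression equations into a single one, here is a small Python simulation; the sample size, the number of regressors, the coefficient values and the error distribution are all arbitrary choices:

```python
# Sketch of the matrix form y = X beta + eps, with simulated data.
import numpy as np

rng = np.random.default_rng(0)
N, K = 100, 3

X = rng.normal(size=(N, K))        # N x K design matrix (simulated regressors)
beta = np.array([1.0, 2.0, -0.5])  # K x 1 vector of regression coefficients (arbitrary)
eps = rng.normal(size=N)           # N x 1 vector of error terms

y = X @ beta + eps                 # all N equations y_i = x_i beta + eps_i at once
```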

Intercept

The vector of regressors $x_{i}$ usually contains a constant variable equal to 1.

Without loss of generality, we can assume that the constant is the first entry of $x_{i}$.

Therefore, the first column of the design matrix X is a column of 1s.

The regression coefficient corresponding to the constant variable is called the intercept.

Example Suppose that the number of regressors is $K=2$ and the regression includes a constant equal to 1. Then, we have that $$y_{i}=x_{i}\beta+\varepsilon_{i}=\beta_{1}+\beta_{2}x_{i,2}+\varepsilon_{i}$$ The coefficient $\beta_{1}$ is the intercept of the regression.
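In code, including the constant amounts to prepending a column of 1s to the design matrix. A minimal sketch, where the values of the second regressor are made up:

```python
# Sketch: with K = 2 and a constant regressor, the design matrix
# has a first column of 1s; beta_1 is the intercept.
import numpy as np

x2 = np.array([0.5, 1.3, 2.1, 3.0])          # hypothetical values of the second regressor
X = np.column_stack([np.ones_like(x2), x2])  # first column of 1s
print(X)
```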

Zero-mean errors

When an intercept is included in the regression, we can assume without loss of generality that the expected value of the error term is equal to 0.

Consider, for instance, the previous example.

If we had $\mathrm{E}\left[\varepsilon_{i}\right]\neq 0$, then we could write $$y_{i}=\beta_{1}+\beta_{2}x_{i,2}+\varepsilon_{i}=\left(\beta_{1}+\mathrm{E}\left[\varepsilon_{i}\right]\right)+\beta_{2}x_{i,2}+\left(\varepsilon_{i}-\mathrm{E}\left[\varepsilon_{i}\right]\right)$$

We could then define a new regression equation $$y_{i}=\gamma_{1}+\beta_{2}x_{i,2}+u_{i}$$ where $$\gamma_{1}=\beta_{1}+\mathrm{E}\left[\varepsilon_{i}\right]\qquad\text{and}\qquad u_{i}=\varepsilon_{i}-\mathrm{E}\left[\varepsilon_{i}\right]$$

The expected value of the new error would be zero because $$\mathrm{E}\left[u_{i}\right]=\mathrm{E}\left[\varepsilon_{i}-\mathrm{E}\left[\varepsilon_{i}\right]\right]=\mathrm{E}\left[\varepsilon_{i}\right]-\mathrm{E}\left[\varepsilon_{i}\right]=0$$
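A quick numerical check of this re-centering argument, assuming (arbitrarily) that the errors have mean 0.7:

```python
# Check: absorbing E[eps] into the intercept leaves the regression
# equation unchanged and makes the new error term zero-mean.
import numpy as np

rng = np.random.default_rng(1)
beta1, beta2, mu = 1.0, 2.0, 0.7        # mu = E[eps], a nonzero error mean (arbitrary)
x2 = rng.normal(size=1000)
eps = mu + rng.normal(size=1000)        # errors with mean mu instead of 0

y = beta1 + beta2 * x2 + eps

gamma1 = beta1 + mu                     # new intercept absorbs E[eps]
u = eps - mu                            # new error term, zero-mean by construction

print(np.allclose(y, gamma1 + beta2 * x2 + u))  # True: the equation is unchanged
print(u.mean())                                  # close to 0
```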

OLS estimator

Usually, the vector of regression coefficients $eta $ is unknown and needs to be estimated.

The most commonly used estimator of $eta $ is the Ordinary Least Squares (OLS) estimator.

The OLS estimator is not only computationally convenient, but it also enjoys good statistical properties under several different sets of mathematical assumptions on the joint distribution of $X$ and $\varepsilon$.

The following is a formal definition of the OLS estimator.

Definition An estimator $\widehat{\beta}$ is an OLS estimator of $\beta$ if and only if it satisfies $$\widehat{\beta}=\arg\min_{b}\sum_{i=1}^{N}\left(y_{i}-x_{i}b\right)^{2}$$

The OLS estimator is the vector of estimated regression coefficients that minimizes the sum of the squared distances between predicted values $x_{i}b$ and observed values $y_{i}$.

In other words, the OLS estimator makes the predicted values as close as possible to the actual output values.
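To make the definition concrete, the sketch below recovers the OLS estimate by minimizing the sum of squared residuals numerically with a generic optimizer, on simulated data (sample size, coefficients and error distribution are arbitrary). The closed-form solution is derived in the sections that follow.

```python
# Sketch: find the OLS estimate by numerically minimizing the SSR.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
N, K = 200, 3
X = np.column_stack([np.ones(N), rng.normal(size=(N, K - 1))])
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(size=N)

def ssr(b):
    resid = y - X @ b                 # y_i - x_i b for all i
    return resid @ resid              # sum of squared residuals

beta_hat = minimize(ssr, x0=np.zeros(K)).x
print(beta_hat)                       # close to beta_true
```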

Residuals

A residual $$e_{i}=y_{i}-x_{i}\widehat{\beta}$$ is the difference between the observed output $y_{i}$ and its predicted value $x_{i}\widehat{\beta}$.

Thus, the OLS estimator is the estimator that minimizes the sum of squared residuals.

Formula for the OLS estimator

If the design matrix has full rank, the OLS minimization problem has a solution that is both unique and explicit.

Proposition If the design matrix $X$ has full rank, then the OLS estimator is $$\widehat{\beta}=\left(X^{\top}X\right)^{-1}X^{\top}y$$

Proof

First of all, observe that the sum of squared residuals, henceforth indicated by $SSR$, can be written in matrix form as follows: $$SSR(b)=\left(y-Xb\right)^{\top}\left(y-Xb\right)$$ The first order condition for a minimum is that the gradient of $SSR$ with respect to $b$ should be equal to zero: $$\nabla_{b}\,SSR(b)=-2X^{\top}\left(y-Xb\right)=0$$ that is, $$X^{\top}y-X^{\top}Xb=0$$ or $$X^{\top}Xb=X^{\top}y$$ Now, if $X$ has full rank (i.e., rank equal to $K$), then the matrix $X^{\top}X$ is invertible. As a consequence, the first order condition is satisfied by $$b=\left(X^{\top}X\right)^{-1}X^{\top}y$$ We now need to check that this is indeed a global minimum. Note that the Hessian matrix, that is, the matrix of second derivatives of $SSR$, is $$\nabla_{b}^{2}\,SSR(b)=2X^{\top}X$$ But $X^{\top}X$ is a positive definite matrix because, for any $a\neq 0$, we have $$a^{\top}X^{\top}Xa=\left(Xa\right)^{\top}\left(Xa\right)=\sum_{i=1}^{N}\left(x_{i}a\right)^{2}>0$$ where the last inequality follows from the fact that $X$ has full rank (and, as a consequence, $a\neq 0$ implies that $x_{i}a$ cannot be equal to 0 for every $i$). Thus, $SSR$ is strictly convex in $b$, which implies that $b$ is indeed a global minimum.
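Here is a sketch of the formula in code, on the same kind of simulated data as before. Forming the inverse explicitly is shown only to mirror the proposition; in practice, solving the normal equations (or calling a least-squares routine) is numerically preferable:

```python
# Sketch of the closed-form OLS formula beta_hat = (X'X)^{-1} X'y.
import numpy as np

rng = np.random.default_rng(3)
N, K = 200, 3
X = np.column_stack([np.ones(N), rng.normal(size=(N, K - 1))])
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(size=N)

beta_textbook = np.linalg.inv(X.T @ X) @ X.T @ y    # literal formula
beta_solve = np.linalg.solve(X.T @ X, X.T @ y)      # normal equations, better conditioned
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)  # library least-squares routine

print(np.allclose(beta_textbook, beta_solve), np.allclose(beta_solve, beta_lstsq))
```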

Models and assumptions

The linearity assumption $$y_{i}=x_{i}\beta+\varepsilon_{i}$$ is not per se sufficient to determine the mathematical properties of the OLS estimator of $\beta$ (or of any other estimator).

In order to be able to establish any property (e.g., unbiasedness, consistency and asymptotic normality), we need to make further assumptions about the joint distribution of the regressors $X$ and the error terms $\varepsilon$.

These further assumptions, together with the linearity assumption, form a linear regression model.

The next section provides an example.

The normal linear regression model

A popular linear regression model is the so-called Normal Linear Regression Model (NLRM).

In the NLRM it is assumed that:

- the design matrix $X$ has full rank;
- conditional on $X$, the vector of errors $\varepsilon$ has a multivariate normal distribution with zero mean and covariance matrix $\sigma^{2}I$, where $I$ is the $N\times N$ identity matrix.

Under these hypotheses, the OLS estimator has a multivariate normal distribution. Furthermore, the distributions of several test statistics can be derived analytically.
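A Monte Carlo sketch illustrating the first claim, under an arbitrary fixed design, error variance and sample size: repeatedly drawing normal errors and re-estimating $\beta$ produces estimates scattered normally around the true value.

```python
# Monte Carlo sketch: under normal errors, the OLS estimator is itself
# normally distributed around beta (design and sizes are arbitrary).
import numpy as np

rng = np.random.default_rng(4)
N, K, sigma, reps = 50, 2, 1.0, 5000
X = np.column_stack([np.ones(N), rng.normal(size=N)])  # fixed design matrix
beta = np.array([1.0, 2.0])

estimates = np.empty((reps, K))
for r in range(reps):
    y = X @ beta + sigma * rng.normal(size=N)          # NLRM: eps ~ N(0, sigma^2 I)
    estimates[r] = np.linalg.solve(X.T @ X, X.T @ y)

print(estimates.mean(axis=0))  # approximately beta; a histogram of each
                               # column looks approximately normal
```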

More details about the NLRM can be found in the lecture on the Normal Linear Regression Model.

More realistic models

The NLRM has several appealing properties, but its assumptions are unrealistic in many practical cases of interest.

For this reason, we often prefer to make weaker assumptions, under which it is possible to prove that the OLS estimators are consistent and asymptotically normal.

These assumptions are discussed in the lecture on the properties of the OLS estimator.

Learn about the mathematics of linear regression

If you want to learn more about the mathematics of linear regression, you can read the related lectures on this website, in particular those on the Normal Linear Regression Model and on the properties of the OLS estimator.

How to cite

Please cite as:

Taboga, Marco (2021). "Linear regression model", Lectures on probability theory and mathematical statistics. Kindle Direct Publishing. Online appendix. https://www.statlect.com/fundamentals-of-statistics/linear-regression.
