Linear regression models belong to the class of conditional models. In a linear regression model, the output variable (also called dependent variable, or regressand) is assumed to be a linear function of the input variables (also called independent variables, or regressors) and of an unobservable error term that adds noise to the linear relationship between inputs and outputs.
This section introduces the main assumptions and the notation and terminology used in dealing with linear regression models.
We assume that the statistician observes a sample of $N$ realizations $(y_i, x_i)$ for $i = 1, \ldots, N$ (i.e., the sample size is equal to $N$). The output variables $y_i$, which are scalars, are denoted by $y_1, \ldots, y_N$, and the associated inputs, which are $1 \times K$ row vectors, are denoted by $x_1, \ldots, x_N$.
It is postulated that there is a linear relationship between inputs and outputs:
$$y_i = x_i \beta + \varepsilon_i$$
where $\beta$ is a $K \times 1$ vector of constants, called regression coefficients, and $\varepsilon_i$ is an unobservable error term which encompasses the sources of variability in $y_i$ that are not included in the vector of inputs $x_i$ (for example, measurement errors or input variables that are not observed by the statistician). Note that the relationship is assumed to hold for each $i = 1, \ldots, N$, with the same $\beta$.
Example Suppose we have a sample of $N$ individuals for which weight, height and age are observed, and we want to set up a linear regression model to predict weight based on height and age. Then, we could postulate that
$$W_i = \beta_1 + \beta_2 A_i + \beta_3 H_i + \varepsilon_i$$
where $W_i$, $A_i$ and $H_i$ denote the weight, age and height of the $i$-th individual in the sample, respectively, $\beta_1$, $\beta_2$ and $\beta_3$ are regression coefficients, and $\varepsilon_i$ is an error term. This regression equation can be written as
$$y_i = x_i \beta + \varepsilon_i$$
by defining $y_i = W_i$, the $1 \times 3$ vector $x_i$ as
$$x_i = \begin{bmatrix} 1 & A_i & H_i \end{bmatrix}$$
and the $3 \times 1$ vector $\beta$ as
$$\beta = \begin{bmatrix} \beta_1 \\ \beta_2 \\ \beta_3 \end{bmatrix}$$
Denote by $y$ the $N \times 1$ vector of outputs
$$y = \begin{bmatrix} y_1 \\ \vdots \\ y_N \end{bmatrix}$$
by $X$ the $N \times K$ matrix of inputs, whose rows are the input vectors,
$$X = \begin{bmatrix} x_1 \\ \vdots \\ x_N \end{bmatrix}$$
and by $\varepsilon$ the $N \times 1$ vector of error terms
$$\varepsilon = \begin{bmatrix} \varepsilon_1 \\ \vdots \\ \varepsilon_N \end{bmatrix}$$
Then, the linear relationship can be expressed as
$$y = X \beta + \varepsilon$$
The matrix $X$ is often called the design matrix.
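The matrix form above is just a compact way of stacking the $N$ scalar equations. A minimal Python/NumPy sketch can verify this numerically; the dimensions, coefficients and random seed here are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

N, K = 5, 3                        # sample size and number of regressors (illustrative)
X = rng.normal(size=(N, K))        # N x K design matrix: row i is the input vector x_i
beta = np.array([1.0, -2.0, 0.5])  # K x 1 vector of regression coefficients (assumed)
eps = rng.normal(size=N)           # N x 1 vector of error terms

# Matrix form of the linear relationship
y = X @ beta + eps

# It stacks the N scalar equations y_i = x_i beta + eps_i
row_wise = np.array([X[i] @ beta + eps[i] for i in range(N)])
print(np.allclose(y, row_wise))  # True
```
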
The vector of regressors $x_i$ is usually assumed to contain a constant variable equal to $1$. Without loss of generality, it can be assumed that it is the first entry of $x_i$, so that the first column of the design matrix $X$ is a column of $1$s.
The regression coefficient corresponding to the constant variable is called the intercept.
Example Suppose the number of regressors is $K = 2$ and the regression includes a constant equal to $1$. Then, we have that
$$y_i = \beta_1 + \beta_2 x_{i2} + \varepsilon_i$$
The coefficient $\beta_1$ is the intercept of the regression.
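A design matrix of this kind is easy to build explicitly. The following sketch, with made-up input and coefficient values, shows the column of ones corresponding to the constant regressor, so that $\beta_1$ acts as the intercept:

```python
import numpy as np

# Toy inputs for a K = 2 regression with an intercept (illustrative values)
x2 = np.array([1.0, 2.0, 3.0, 4.0])
N = len(x2)

# First column of the design matrix is the constant regressor equal to 1
X = np.column_stack([np.ones(N), x2])

beta = np.array([0.5, 2.0])  # beta_1 = 0.5 is the intercept (assumed values)
print(X @ beta)              # [2.5 4.5 6.5 8.5]
```
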
Note that when an intercept is included in the regression, then it can be assumed without loss of generality that the expected value of the error term is equal to $0$. For instance, in the previous example, if $E[\varepsilon_i] \neq 0$, then we can write
$$y_i = \left(\beta_1 + E[\varepsilon_i]\right) + \beta_2 x_{i2} + \left(\varepsilon_i - E[\varepsilon_i]\right)$$
and define a new regression equation
$$y_i = \gamma_1 + \beta_2 x_{i2} + u_i$$
where $\gamma_1 = \beta_1 + E[\varepsilon_i]$ and $u_i = \varepsilon_i - E[\varepsilon_i]$. Of course, $E[u_i] = 0$ because
$$E[u_i] = E[\varepsilon_i] - E[\varepsilon_i] = 0$$
Statistical inference about a regression model is usually carried out in the form of point estimation, set estimation and hypothesis testing about:
the vector of regression coefficients $\beta$;
the covariance matrix $V[\varepsilon]$ of the vector of error terms $\varepsilon$ (as well as other characteristics of the distribution of $\varepsilon$).
Furthermore, the estimates of $\beta$ and of the distribution of $\varepsilon$ are usually employed to make predictions about observations that do not belong to the sample. For example, the inputs $x_i$ of an out-of-sample observation can be used to compute the expected value of its corresponding output $y_i$.
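Such out-of-sample prediction can be sketched in a few lines of Python/NumPy. All numbers below (sample size, true coefficients, noise level, seed, and the new input vector) are made up for illustration, and the coefficients are estimated by least squares:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 80
X = np.column_stack([np.ones(N), rng.normal(size=N)])  # in-sample design matrix
beta = np.array([3.0, 1.5])                            # assumed true coefficients
y = X @ beta + 0.1 * rng.normal(size=N)                # in-sample outputs

# Estimate the coefficients from the sample (here via least squares)
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Predict the expected output for an out-of-sample input vector x_new
x_new = np.array([1.0, 2.0])
y_pred = x_new @ beta_hat
print(abs(y_pred - x_new @ beta) < 0.5)  # True: close to the true expected value
```
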
In order to derive estimators of the vector of regression coefficients $\beta$ and of the covariance of the errors $V[\varepsilon]$ (as well as to establish properties of the estimators such as unbiasedness, consistency and asymptotic variance) it is necessary to make some assumptions about the joint distribution of the matrix of regressors $X$ and the vector of error terms $\varepsilon$. We will discuss such assumptions in the following sections. However, we would like to anticipate the fact that the most commonly used estimator of $\beta$ is the Ordinary Least Squares (OLS) estimator. As we will explain, the OLS estimator is not only computationally convenient, but it enjoys good statistical properties under different sets of assumptions on the joint distribution of $X$ and $\varepsilon$.
The following is a formal definition of the OLS estimator.
Definition An estimator $\widehat{\beta}$ is an OLS estimator of $\beta$ if and only if $\widehat{\beta}$ satisfies
$$\widehat{\beta} = \operatorname*{arg\,min}_{b \in \mathbb{R}^{K}} \sum_{i=1}^{N} \left(y_i - x_i b\right)^2$$
In other words, the OLS estimator is obtained by finding a vector of estimated regression coefficients that minimizes the sum, over all observations, of the squared residuals, where a residual
$$e_i = y_i - x_i b$$
is the difference between the observed output $y_i$ and its predicted value $x_i b$ (predicted under the hypothesis that $b$ is the vector of regression coefficients).
Note that the closer the predicted values are to the actual output values, the smaller the sum of squared residuals is. Thus, the OLS estimator is the vector of regression coefficients that makes the predicted values as close as possible to the actual output values (the catch here is that the distances between predicted and observed values are measured by their squared differences).
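This criterion is straightforward to evaluate numerically. In the sketch below (sample size, true coefficients, noise level and seed are all made up), coefficients near the truth produce predictions close to the observed outputs and hence a much smaller sum of squared residuals than coefficients far from it:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50
X = np.column_stack([np.ones(N), rng.normal(size=N)])
beta_true = np.array([2.0, -1.0])              # assumed "true" coefficients
y = X @ beta_true + 0.1 * rng.normal(size=N)   # outputs with small noise

def ssr(b):
    """Sum of squared residuals e_i = y_i - x_i b for candidate coefficients b."""
    residuals = y - X @ b
    return residuals @ residuals

# Coefficients near the truth fit far better than coefficients far from it
print(ssr(beta_true) < ssr(np.array([0.0, 0.0])))  # True
```
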
Under the assumption that the design matrix has full rank, the minimization problem above has a solution that is both unique and explicit.
Proposition If the design matrix $X$ has full rank, then the OLS estimator is
$$\widehat{\beta} = \left(X^\top X\right)^{-1} X^\top y$$
First of all, observe that the sum of squared residuals, henceforth indicated by $S(b)$, can be written in matrix form as follows:
$$S(b) = \sum_{i=1}^{N} \left(y_i - x_i b\right)^2 = (y - Xb)^\top (y - Xb)$$
The first order condition for a minimum is that the gradient of $S(b)$ with respect to $b$ should be equal to zero:
$$\nabla_b S(b) = -2 X^\top (y - Xb) = 0$$
that is,
$$X^\top y - X^\top X b = 0$$
or
$$X^\top X b = X^\top y$$
Now, if $X$ has full rank (i.e., rank equal to $K$), then the matrix $X^\top X$ is invertible. As a consequence, the first order condition is satisfied by
$$\widehat{\beta} = \left(X^\top X\right)^{-1} X^\top y$$
We now need to check that this is indeed a global minimum. Note that the Hessian matrix, that is, the matrix of second derivatives of $S(b)$, is
$$\nabla^2_b S(b) = 2 X^\top X$$
But $X^\top X$ is a positive definite matrix because, for any $a \neq 0$, we have
$$a^\top X^\top X a = (Xa)^\top (Xa) = \lVert Xa \rVert^2 > 0$$
where the last inequality follows from the fact that $X$ has full rank (and, as a consequence, $Xa$ cannot be equal to $0$ for any $a \neq 0$). Thus, $S(b)$ is strictly convex in $b$, which implies that $\widehat{\beta}$ is indeed a global minimum.
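The closed-form solution and the first order condition can both be checked numerically. In this Python/NumPy sketch (dimensions, coefficients and seed are illustrative), the normal equations are solved directly rather than by forming the inverse explicitly, and the result is compared with NumPy's own least squares routine:

```python
import numpy as np

rng = np.random.default_rng(2)
N, K = 100, 3
X = np.column_stack([np.ones(N), rng.normal(size=(N, K - 1))])
beta = np.array([1.0, 2.0, -0.5])        # assumed true coefficients
y = X @ beta + rng.normal(size=N)

# Closed-form OLS estimator (X'X)^{-1} X'y; solve() avoids forming the inverse
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# The first order condition holds: the gradient -2 X'(y - X beta_hat) is zero
grad = -2 * X.T @ (y - X @ beta_hat)
print(np.allclose(grad, 0))  # True

# It agrees with the minimizer found by numpy's least squares routine
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(beta_hat, beta_lstsq))  # True
```
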
As already anticipated, the linearity assumption
$$y = X \beta + \varepsilon$$
is not per se sufficient to determine the properties of the OLS estimator of $\beta$, or of any other estimator of $\beta$ and of the characteristics of the distribution of $\varepsilon$. In order to be able to derive any meaningful property, we need to make further assumptions about the joint distribution of the regressors $X$ and the error terms $\varepsilon$. These further assumptions, together with the linearity assumption, form a linear regression model.
A popular linear regression model is the so called Normal Linear Regression Model (NLRM), in which it is assumed that the vector of errors $\varepsilon$ has a multivariate normal distribution conditional on the design matrix $X$, and that the covariance matrix of $\varepsilon$ is diagonal and all the diagonal entries are equal (in other words, the entries of $\varepsilon$ are mutually independent and have constant variance). Under these hypotheses, the vector of OLS estimators of the regression coefficients has a multivariate normal distribution, and the distributions of several test statistics can be derived analytically. More details about the NLRM can be found in the lecture entitled Normal Linear Regression Model.
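A small simulation illustrates the sampling behaviour of the OLS estimator under NLRM-style assumptions (i.i.d. normal errors with constant variance, fixed design). All numbers below (sample size, coefficients, error standard deviation, number of replications, seed) are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
N, K, reps = 200, 2, 2000
X = np.column_stack([np.ones(N), rng.normal(size=N)])  # fixed design matrix
beta = np.array([1.0, 0.5])                            # assumed true coefficients
sigma = 2.0                                            # common error std deviation

# Precompute the linear map from outputs to OLS estimates, (X'X)^{-1} X'
XtX_inv_Xt = np.linalg.solve(X.T @ X, X.T)

estimates = np.empty((reps, K))
for r in range(reps):
    eps = sigma * rng.normal(size=N)   # i.i.d. normal errors each replication
    estimates[r] = XtX_inv_Xt @ (X @ beta + eps)

# Across replications the OLS estimates are centered at the true coefficients,
# consistent with their multivariate normal sampling distribution
print(np.allclose(estimates.mean(axis=0), beta, atol=0.05))  # True
```
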
While the NLRM has several appealing properties, its assumptions are unrealistic in many practical cases of interest. For this reason, it is often deemed preferable to make weaker assumptions, under which it is possible to prove that the OLS estimators are consistent and asymptotically normal. These assumptions are discussed in the lecture entitled Properties of the OLS estimator.