Estimation methods are general techniques that can be used to derive estimators in a parametric estimation problem.
Let us recall the main elements of a parametric estimation problem, which were explained in detail in the lecture on point estimation:
we use a sample to make statements about the probability distribution that generated the sample;
the sample $\xi$ is regarded as the realization of a random vector $\Xi$;
the unknown joint distribution function of $\Xi$, denoted by $F$, is assumed to belong to a set of distribution functions $\Phi$, called the statistical model;
the model $\Phi$ is put into correspondence with a set $\Theta$ of real vectors; $\Theta$ is called the parameter space and its elements $\theta \in \Theta$ are called parameters;
we denote by $\theta_0$ the parameter associated with the unknown data-generating distribution $F$; if several different parameters are put into correspondence with $F$, $\theta_0$ can be any one of them;
$\theta_0$ is called the true parameter;
an estimator $\widehat{\theta}$ is a predefined rule (a function) that associates a parameter estimate $\widehat{\theta}(\xi)$ to each $\xi$ in the support of $\Xi$;
the symbol $\widehat{\theta}$ is often used to denote both the estimate and the estimator; the meaning is usually clear from the context.
The aim of an estimation method is to produce a parameter estimate $\widehat{\theta}$ that is as close as possible to the true parameter $\theta_0$.
Several widely employed estimators fall within the class of extremum estimators.
An estimator $\widehat{\theta}$ is an extremum estimator if it can be represented as the solution of a maximization problem:
$$\widehat{\theta} = \operatorname*{arg\,max}_{\theta \in \Theta} Q(\theta, \xi)$$
where $Q(\theta, \xi)$ is a function of both the parameter $\theta$ and the sample $\xi$.
General conditions can be derived for the consistency and asymptotic normality of extremum estimators. We do not discuss them here (see, e.g., Hayashi 2000), but we give some examples of extremum estimation methods and refer the reader to lectures that describe these examples in more detail.
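As a concrete illustration before turning to the specific methods, the following Python sketch (not part of the original lecture; the objective and all variable names are chosen purely for illustration) computes an extremum estimator by numerically maximizing an objective $Q(\theta, \xi)$. The objective used here is minus the sum of squared deviations of the sample from $\theta$, so the maximizer should coincide with the sample mean.

```python
# A minimal sketch (not from the lecture): an extremum estimator computed by
# numerical maximization of an objective Q(theta, xi). Q is chosen, purely for
# illustration, as minus the sum of squared deviations of the sample from
# theta, so the arg max is the sample mean.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
xi = rng.normal(loc=2.0, scale=1.0, size=200)  # observed sample (simulated)

def Q(theta, xi):
    """Objective function of both the parameter and the sample."""
    return -np.sum((xi - theta) ** 2)

# Extremum estimator: theta_hat = arg max Q(theta, xi),
# implemented as the minimization of -Q.
result = minimize_scalar(lambda theta: -Q(theta, xi))
theta_hat = result.x

print(theta_hat, xi.mean())  # the two should coincide up to numerical error
```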
In maximum likelihood estimation, we maximize the likelihood of the sample:
$$\widehat{\theta} = \operatorname*{arg\,max}_{\theta \in \Theta} L(\theta; \xi)$$
where:
if $\Xi$ is discrete, the likelihood $L(\theta; \xi)$ is the joint probability mass function of $\Xi$ associated with the distribution that corresponds to the parameter $\theta$;
if $\Xi$ is absolutely continuous, the likelihood $L(\theta; \xi)$ is the joint probability density function of $\Xi$ associated with the distribution that corresponds to the parameter $\theta$.
The vector $\widehat{\theta}$ is called the maximum likelihood estimator of $\theta_0$.
The maximum likelihood estimation method is discussed in more detail in the lecture entitled Maximum Likelihood.
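The following hypothetical sketch (not from the lecture) casts maximum likelihood estimation as an extremum problem: the sample is assumed to be i.i.d. from an exponential distribution with unknown rate, the log-likelihood is maximized numerically, and the result is checked against the closed-form maximizer, which for this model is one over the sample mean.

```python
# A minimal maximum likelihood sketch (hypothetical, for illustration only):
# the sample is assumed i.i.d. exponential with unknown rate lambda, and the
# log-likelihood is maximized numerically over the parameter space (0, inf).
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
xi = rng.exponential(scale=1 / 0.5, size=500)  # true rate lambda_0 = 0.5

def log_likelihood(lam, xi):
    """Joint log-density of the sample under the exponential(lam) model."""
    return xi.size * np.log(lam) - lam * np.sum(xi)

# Maximum likelihood estimator: arg max of the likelihood,
# implemented as the minimization of the negative log-likelihood.
result = minimize_scalar(lambda lam: -log_likelihood(lam, xi),
                         bounds=(1e-8, 100.0), method="bounded")
lambda_hat = result.x

print(lambda_hat, 1 / xi.mean())  # numerical maximizer vs closed-form MLE
```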
In the generalized method of moments (GMM) estimation method, the distribution associated with the parameter $\theta_0$ satisfies a moment condition:
$$\mathbb{E}_{\theta_0}\left[g(\Xi, \theta_0)\right] = 0$$
where $g$ is a (vector) function and $\mathbb{E}_{\theta_0}$ indicates that the expected value is computed using the distribution associated with $\theta_0$.
The GMM estimator is obtained as
$$\widehat{\theta} = \operatorname*{arg\,min}_{\theta \in \Theta} \; d\big(\widehat{g}(\theta)\big)$$
where $\widehat{g}(\theta)$ is the sample counterpart of the moment $\mathbb{E}_{\theta_0}\left[g(\Xi, \theta)\right]$ and $d\big(\widehat{g}(\theta)\big)$ is a measure of the distance of $\widehat{g}(\theta)$ from its expected value of zero. The estimator is an extremum estimator because
$$\widehat{\theta} = \operatorname*{arg\,max}_{\theta \in \Theta} \; \Big(-d\big(\widehat{g}(\theta)\big)\Big)$$
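As an illustration, the following hypothetical sketch applies GMM to the moment conditions $\mathbb{E}[X - \mu] = 0$ and $\mathbb{E}[(X - \mu)^2 - \sigma^2] = 0$, minimizing a quadratic distance of the sample moments from zero with an identity weighting matrix. Since the number of moments equals the number of parameters, the estimates should coincide with the sample mean and variance. The data, names, and modeling choices are illustrative assumptions, not part of the lecture.

```python
# A minimal GMM sketch (hypothetical, for illustration only): theta = (mu, sigma2)
# satisfies E[x - mu] = 0 and E[(x - mu)^2 - sigma2] = 0. The estimator minimizes
# a quadratic distance of the sample moments from zero (identity weighting matrix).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
xi = rng.normal(loc=1.0, scale=2.0, size=1000)

def g_bar(theta, xi):
    """Sample counterpart of the moment conditions E[g(X, theta)] = 0."""
    mu, sigma2 = theta
    return np.array([np.mean(xi - mu), np.mean((xi - mu) ** 2 - sigma2)])

def distance(theta, xi):
    """Quadratic distance of the sample moments from zero."""
    g = g_bar(theta, xi)
    return g @ g

# GMM estimator: arg min of the distance (equivalently, arg max of minus the distance).
result = minimize(distance, x0=np.array([0.0, 1.0]), args=(xi,))
mu_hat, sigma2_hat = result.x

print(mu_hat, sigma2_hat)    # GMM estimates
print(xi.mean(), xi.var())   # sample mean and variance (should match)
```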
In the least squares estimation method, the sample comprises:
$n$ realizations $y_1, \ldots, y_n$ of a random variable $Y$, called the dependent variable;
$n$ realizations $x_1, \ldots, x_n$ of a random vector $X$, whose components are called independent variables.
It is postulated that there exists a function $f$ such that
$$y_i = f(x_i, \theta_0) + \varepsilon_i \quad \text{for } i = 1, \ldots, n$$
where $\varepsilon_i$ is an unobservable error term.
The least squares estimator is obtained as
$$\widehat{\theta} = \operatorname*{arg\,min}_{\theta \in \Theta} \sum_{i=1}^{n} \left(y_i - f(x_i, \theta)\right)^2$$
The estimator is an extremum estimator because
$$\widehat{\theta} = \operatorname*{arg\,max}_{\theta \in \Theta} \left(-\sum_{i=1}^{n} \left(y_i - f(x_i, \theta)\right)^2\right)$$
A special case of the least squares estimator is analyzed in detail in the lecture on the properties of the OLS estimator.
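To connect the general formulation above with a computation, the following hypothetical sketch minimizes the sum of squared residuals for the postulated regression function $f(x, \theta) = \theta_1 + \theta_2 x$ and checks the result against the closed-form OLS solution. The data-generating values and function names are assumptions made for illustration.

```python
# A minimal least squares sketch (hypothetical, for illustration only): the
# regression function is taken to be f(x, theta) = theta[0] + theta[1] * x, and
# the estimator minimizes the sum of squared residuals. In this linear case the
# result can be checked against the closed-form OLS solution.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n = 300
x = rng.uniform(0.0, 10.0, size=n)                 # independent variable
y = 1.5 + 0.8 * x + rng.normal(scale=1.0, size=n)  # dependent variable

def f(x, theta):
    """Postulated regression function."""
    return theta[0] + theta[1] * x

def ssr(theta, x, y):
    """Sum of squared residuals, the least squares criterion."""
    return np.sum((y - f(x, theta)) ** 2)

# Least squares estimator: arg min of the sum of squared residuals
# (equivalently, arg max of minus the sum, so it is an extremum estimator).
result = minimize(ssr, x0=np.zeros(2), args=(x, y))
theta_hat = result.x

# Check against the closed-form OLS solution.
X = np.column_stack([np.ones(n), x])
theta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
print(theta_hat, theta_ols)
```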
Hayashi, F. (2000) Econometrics, Princeton University Press.