In the theory of parameter estimation, an estimator is a function that associates a parameter estimate to each possible sample we can observe.
In a parameter estimation problem, we need to choose a parameter $\widehat{\theta}$ from a parameter space $\Theta$, by using a sample $\xi$ (a set of observations drawn from an unknown probability distribution).
The parameter $\widehat{\theta}$ is our best guess of the true and unknown parameter $\theta_{0}$, which is associated to (or describes) the probability distribution that generated the sample. The parameter $\widehat{\theta}$ is called an estimate of $\theta_{0}$.
When the estimate is produced by using a predefined rule that associates a parameter estimate to each possible sample $\xi$, we can write $\widehat{\theta}$ as a function of $\xi$:
$$\widehat{\theta}=\widehat{\theta}(\xi)$$
The function $\widehat{\theta}(\cdot)$ is called an estimator.
The sample $\xi$, before being observed, is regarded as a random variable drawn from the distribution of interest. Therefore, the estimator $\widehat{\theta}(\xi)$, being a function of $\xi$, is also regarded as a random variable.
After the sample $\xi$ is observed, the realization of the estimator is called an estimate of the true parameter $\theta_{0}$. In other words, an estimate is a realization of an estimator.
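The distinction between an estimator (a random variable) and an estimate (its realization) can be illustrated with a small sketch. The code below is a hypothetical example, not part of the original lecture; it uses the sample mean as the estimation rule and a normal distribution (with assumed true mean 5) as the data-generating process.

```python
import numpy as np

rng = np.random.default_rng(0)

# Estimator: a predefined rule mapping any possible sample to an estimate.
# Here the rule is the sample mean, used to estimate the expected value.
def sample_mean(xi):
    return np.mean(xi)

# Before observation, the sample (hence the estimator) is random:
# different draws from the same distribution yield different realizations,
# i.e., different estimates of the same true parameter.
estimates = [sample_mean(rng.normal(loc=5.0, scale=2.0, size=100))
             for _ in range(3)]
print(estimates)  # three distinct estimates, all near the true mean 5
```

Each entry of `estimates` is one realization of the estimator, obtained from one observed sample.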
Commonly found examples of estimators are:
the sample mean, used to estimate the expected value of an unknown distribution;
the sample variance, used to estimate the variance of an unknown distribution;
the OLS estimator, used to estimate the vector of regression coefficients in a linear regression model.
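The three estimators listed above can be sketched in a few lines of code. This is an illustrative example with assumed true parameters (mean 3, standard deviation 1.5, regression coefficients 1 and 2), not taken from the original text.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000

# Sample from a normal distribution with known true parameters,
# so the estimates can be compared with the truth.
xi = rng.normal(loc=3.0, scale=1.5, size=n)

mean_hat = np.mean(xi)        # sample mean: estimates the expected value (3.0)
var_hat = np.var(xi, ddof=1)  # sample variance: estimates the variance (2.25)

# OLS estimator of the coefficient vector in y = X @ beta + noise
X = np.column_stack([np.ones(n), rng.uniform(size=n)])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + rng.normal(scale=0.1, size=n)
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

print(mean_hat, var_hat, beta_hat)
```

With a large sample, each estimate lands close to the corresponding true parameter, although it virtually never equals it exactly.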
Different estimators of the same parameter are often compared by looking at their mean squared error, which is equal to the expected value of the squared difference between the estimator and the true value of the parameter. For an example of such comparisons, see the lecture on Ridge estimation.
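A comparison of this kind can be approximated by Monte Carlo simulation. The sketch below (a hypothetical setup, not from the original lecture) estimates the mean squared errors of two competing estimators of the mean of a standard normal distribution: the sample mean and the sample median. For normal data the sample mean is the more efficient of the two, so its MSE comes out smaller.

```python
import numpy as np

rng = np.random.default_rng(7)
theta_true = 0.0   # assumed true expected value of the distribution
n, reps = 50, 5000

# Draw many independent samples, apply each estimator to every sample,
# and average the squared deviations from the true parameter.
samples = rng.normal(loc=theta_true, size=(reps, n))
mse_mean = np.mean((samples.mean(axis=1) - theta_true) ** 2)
mse_median = np.mean((np.median(samples, axis=1) - theta_true) ** 2)

print(mse_mean, mse_median)  # MSE of the sample mean is the smaller one
```

The simulated MSE of the sample mean is close to its theoretical value $1/n$, and the sample median's MSE exceeds it by a factor of roughly $\pi/2$.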
More details about estimators can be found in the lecture entitled Point estimation, which discusses the concept of estimator and the main criteria used to evaluate estimators.