
Unbiased estimator

An estimator of a given parameter is said to be unbiased if its expected value is equal to the true value of the parameter.

In other words, an estimator is unbiased if it produces parameter estimates that are on average correct.

Remember that in a parameter estimation problem:

The estimate $\widehat{\theta }$ is usually obtained by using a predefined rule (a function) that associates an estimate $\widehat{\theta }$ to each sample $\xi $ that could possibly be observed.


The function $\widehat{\theta }(\xi )$ is called an estimator.

Definition An estimator $\widehat{\theta }(\xi )$ is said to be unbiased if and only if$$\mathrm{E}\left[ \widehat{\theta }(\xi )\right] =\theta $$where the expected value is calculated with respect to the probability distribution of the sample $\xi $.
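The definition can be checked by simulation: average many estimates, each computed from a fresh sample, and compare the average with the true parameter. A minimal sketch (not from the original lecture, using only the Python standard library), taking the sample mean of normal draws as the estimator:

```python
import random

# Monte Carlo check of unbiasedness: the sample mean of n i.i.d. draws
# should, averaged over many samples, equal the true mean.
random.seed(0)

true_mean = 5.0        # the true parameter theta
n = 10                 # sample size
replications = 100_000 # number of independent samples

estimates = []
for _ in range(replications):
    sample = [random.gauss(true_mean, 2.0) for _ in range(n)]
    estimates.append(sum(sample) / n)  # one estimate per observed sample

avg_estimate = sum(estimates) / replications
print(f"average of estimates: {avg_estimate:.3f} (true mean: {true_mean})")
```

The average of the estimates settles near the true mean as the number of replications grows, which is exactly what $\mathrm{E}[\widehat{\theta }(\xi )]=\theta $ asserts.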


The following table contains examples of unbiased estimators (with links to lectures where unbiasedness is proved).

Estimator                                     | Estimated parameter                          | Lecture where proof can be found
Sample mean                                   | Expected value                               | Estimation of the mean
Adjusted sample variance                      | Variance                                     | Estimation of the variance
OLS estimator                                 | Linear regression coefficients               | Gauss-Markov theorem
Adjusted sample variance of the OLS residuals | Variance of the error of a linear regression | Normal linear regression model

Biased estimator

An estimator which is not unbiased is said to be biased.


The bias of an estimator $\widehat{\theta }$ is the expected difference between $\widehat{\theta }$ and the true parameter:$$\mathrm{bias}\left( \widehat{\theta }\right) =\mathrm{E}\left[ \widehat{\theta }\right] -\theta $$

Thus, an estimator is unbiased if its bias is equal to zero, and biased otherwise.
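As a worked example of this formula (a standard result, proved in the lecture on variance estimation): the unadjusted sample variance $S_{n}^{2}$ of $n$ independent draws with variance $\sigma ^{2}$ has expected value $\mathrm{E}\left[ S_{n}^{2}\right] =\frac{n-1}{n}\sigma ^{2}$, so its bias is$$\mathrm{bias}\left( S_{n}^{2}\right) =\frac{n-1}{n}\sigma ^{2}-\sigma ^{2}=-\frac{\sigma ^{2}}{n}$$which is non-zero (the estimator is biased) but vanishes as the sample size $n$ grows.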

More details

Unbiasedness is discussed in more detail in the lecture entitled Point estimation.

