
Mean squared error of an estimator

by Marco Taboga, PhD

The mean squared error (MSE) of an estimator is a measure of the expected losses generated by the estimator.


We will always assume, unless stated otherwise, that the parameter to be estimated is a vector.


Estimator

Let $\theta_0$ be an unknown parameter to be estimated.

An estimator of $\theta_0$, denoted by $\widehat{\theta}$, is a pre-defined rule that produces an estimate of $\theta_0$ for each possible sample we can observe.

In other words, $\widehat{\theta}$ is a random variable, influenced by sampling variability, whose realizations are equal to the estimates of $\theta_0$.
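To make this concrete, here is a minimal Python sketch (not from the original article; the Gaussian population, true mean and sample size are illustrative assumptions) in which the sample mean plays the role of the estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mean(sample):
    """A pre-defined rule mapping any observed sample to an estimate."""
    return sample.mean()

# Two samples drawn from the same population (true mean 5.0) yield
# two different realizations of the estimator.
sample_1 = rng.normal(loc=5.0, scale=2.0, size=50)
sample_2 = rng.normal(loc=5.0, scale=2.0, size=50)
print(sample_mean(sample_1), sample_mean(sample_2))
```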

Loss function

A loss function is a function $L\left(\widehat{\theta},\theta_0\right)$ that quantifies the losses generated by the estimation errors $\widehat{\theta}-\theta_0$.

Risk

Since the estimator $\widehat{\theta}$ is random, we can compute the expected value of the loss $$R\left(\widehat{\theta}\right)=\operatorname{E}\left[L\left(\widehat{\theta},\theta_0\right)\right]$$ which is called statistical risk.
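Because the risk is an expected value, it can be approximated by averaging the loss over many simulated samples. A sketch of this Monte Carlo approximation (the absolute-error loss, the Gaussian population and the sample-mean estimator are illustrative choices, not part of the definition):

```python
import numpy as np

rng = np.random.default_rng(1)
theta_0, sigma, n = 5.0, 2.0, 50    # assumed true parameter and population
n_reps = 100_000                    # number of simulated samples

def absolute_error(theta_hat, theta_0):
    # One possible loss function; the definition allows many others.
    return abs(theta_hat - theta_0)

# Monte Carlo approximation of the risk E[L(theta_hat, theta_0)],
# with the sample mean as the estimator.
losses = [absolute_error(rng.normal(theta_0, sigma, n).mean(), theta_0)
          for _ in range(n_reps)]
print(np.mean(losses))  # approximates the statistical risk
```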

Definition

When the loss function known as the squared error is used, the statistical risk is called the mean squared error.

Definition Let $\widehat{\theta}$ be an estimator of an unknown parameter $\theta_0$. When the squared error $$L\left(\widehat{\theta},\theta_0\right)=\left\Vert \widehat{\theta}-\theta_0\right\Vert^2$$ is used as a loss function, then the risk $$\operatorname{MSE}\left(\widehat{\theta}\right)=\operatorname{E}\left[\left\Vert \widehat{\theta}-\theta_0\right\Vert^2\right]$$ is called the mean squared error of the estimator $\widehat{\theta}$.

In this definition, $\left\Vert \cdot \right\Vert$ is the Euclidean norm of a vector, equal to the square root of the sum of the squared entries of the vector.
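As a quick numerical check (with made-up values), the squared Euclidean norm of the estimation error is just the sum of its squared entries:

```python
import numpy as np

theta_hat = np.array([1.2, -0.5, 3.1])  # hypothetical estimate
theta_0 = np.array([1.0, 0.0, 3.0])     # hypothetical true parameter

error = theta_hat - theta_0
print(np.linalg.norm(error) ** 2)  # squared Euclidean norm
print(np.sum(error ** 2))          # same value: sum of squared entries
```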

Scalar case

When $\theta_0$ is a scalar, the squared error is $$\left\Vert \widehat{\theta}-\theta_0\right\Vert^2=\left\vert \widehat{\theta}-\theta_0\right\vert^2=\left(\widehat{\theta}-\theta_0\right)^2$$ because the Euclidean norm of a scalar is equal to its absolute value.

Therefore, the MSE becomes $$\operatorname{MSE}\left(\widehat{\theta}\right)=\operatorname{E}\left[\left(\widehat{\theta}-\theta_0\right)^2\right].$$
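In simple cases the simulated MSE can be compared with an exact value. For instance, the sample mean of $n$ independent draws from a Gaussian population with standard deviation $\sigma$ is unbiased with variance $\sigma^2/n$, which is therefore its MSE. A sketch of this comparison (all numerical values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
theta_0, sigma, n = 5.0, 2.0, 50    # assumed population and sample size
n_reps = 200_000

# Each row is a sample; each row mean is one realization of the estimator.
estimates = rng.normal(theta_0, sigma, size=(n_reps, n)).mean(axis=1)
mse = np.mean((estimates - theta_0) ** 2)

print(mse)            # Monte Carlo estimate of the MSE
print(sigma**2 / n)   # exact MSE of the sample mean: 0.08
```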

Bias variance decomposition

The following decomposition is often used to distinguish between the two main sources of estimation error, namely bias and variance.

Proposition The mean squared error of an estimator $\widehat{\theta}$ can be written as $$\operatorname{MSE}\left(\widehat{\theta}\right)=\operatorname{tr}\left[\operatorname{Var}\left(\widehat{\theta}\right)\right]+\left\Vert \operatorname{Bias}\left(\widehat{\theta}\right)\right\Vert^2$$ where $\operatorname{tr}\left[\operatorname{Var}\left(\widehat{\theta}\right)\right]$ is the trace of the covariance matrix of $\widehat{\theta}$ and $$\operatorname{Bias}\left(\widehat{\theta}\right)=\operatorname{E}\left[\widehat{\theta}\right]-\theta_0$$ is the bias of the estimator, that is, the expected difference between the estimator and the true value of the parameter.

Proof

Suppose the true parameter and its estimator are column vectors. Then, we can write:
$$\begin{aligned}
\operatorname{MSE}\left(\widehat{\theta}\right)
&=\operatorname{E}\left[\left(\widehat{\theta}-\theta_0\right)^{\top}\left(\widehat{\theta}-\theta_0\right)\right]\\
&=\operatorname{E}\left[\left(\widehat{\theta}-\operatorname{E}[\widehat{\theta}]+\operatorname{E}[\widehat{\theta}]-\theta_0\right)^{\top}\left(\widehat{\theta}-\operatorname{E}[\widehat{\theta}]+\operatorname{E}[\widehat{\theta}]-\theta_0\right)\right]\\
&\overset{A}{=}\operatorname{E}\Big[\left(\widehat{\theta}-\operatorname{E}[\widehat{\theta}]\right)^{\top}\left(\widehat{\theta}-\operatorname{E}[\widehat{\theta}]\right)+2\left(\operatorname{E}[\widehat{\theta}]-\theta_0\right)^{\top}\left(\widehat{\theta}-\operatorname{E}[\widehat{\theta}]\right)+\left(\operatorname{E}[\widehat{\theta}]-\theta_0\right)^{\top}\left(\operatorname{E}[\widehat{\theta}]-\theta_0\right)\Big]\\
&\overset{B}{=}\operatorname{E}\left[\left(\widehat{\theta}-\operatorname{E}[\widehat{\theta}]\right)^{\top}\left(\widehat{\theta}-\operatorname{E}[\widehat{\theta}]\right)\right]+2\,\operatorname{E}\left[\left(\operatorname{E}[\widehat{\theta}]-\theta_0\right)^{\top}\left(\widehat{\theta}-\operatorname{E}[\widehat{\theta}]\right)\right]+\left\Vert \operatorname{E}[\widehat{\theta}]-\theta_0\right\Vert^{2}\\
&\overset{C}{=}\operatorname{E}\left[\left(\widehat{\theta}-\operatorname{E}[\widehat{\theta}]\right)^{\top}\left(\widehat{\theta}-\operatorname{E}[\widehat{\theta}]\right)\right]+2\left(\operatorname{E}[\widehat{\theta}]-\theta_0\right)^{\top}\operatorname{E}\left[\widehat{\theta}-\operatorname{E}[\widehat{\theta}]\right]+\left\Vert \operatorname{Bias}\left(\widehat{\theta}\right)\right\Vert^{2}\\
&\overset{D}{=}\operatorname{E}\left[\left(\widehat{\theta}-\operatorname{E}[\widehat{\theta}]\right)^{\top}\left(\widehat{\theta}-\operatorname{E}[\widehat{\theta}]\right)\right]+\left\Vert \operatorname{Bias}\left(\widehat{\theta}\right)\right\Vert^{2}\\
&\overset{E}{=}\operatorname{tr}\left[\operatorname{Var}\left(\widehat{\theta}\right)\right]+\left\Vert \operatorname{Bias}\left(\widehat{\theta}\right)\right\Vert^{2}
\end{aligned}$$
where: in step $A$ we have expanded the products; in steps $B$, $C$ and $D$ we have used the linearity of the expected value operator (in particular, $\operatorname{E}\left[\widehat{\theta}-\operatorname{E}[\widehat{\theta}]\right]=0$, which makes the cross term vanish); in step $E$ we have used the fact that the trace of a square matrix is equal to the sum of its diagonal elements, so that $x^{\top}x=\operatorname{tr}\left(xx^{\top}\right)$ and $\operatorname{E}\left[\left(\widehat{\theta}-\operatorname{E}[\widehat{\theta}]\right)^{\top}\left(\widehat{\theta}-\operatorname{E}[\widehat{\theta}]\right)\right]=\operatorname{tr}\left[\operatorname{Var}\left(\widehat{\theta}\right)\right]$.

When the parameter $\theta_0$ is a scalar, the above formula for the bias-variance decomposition becomes $$\operatorname{MSE}\left(\widehat{\theta}\right)=\operatorname{Var}\left(\widehat{\theta}\right)+\operatorname{Bias}\left(\widehat{\theta}\right)^2.$$

Thus, the mean squared error of an unbiased estimator (an estimator that has zero bias) is equal to the variance of the estimator itself.
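Conversely, when the estimator is biased, both terms contribute to the MSE. As an illustrative numerical check (not from the original article), the scalar decomposition can be verified by simulation with a biased estimator, such as the maximum-likelihood estimator of a Gaussian variance, which divides by $n$ instead of $n-1$:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma2_0, n, n_reps = 4.0, 20, 200_000  # true variance, sample size, replications

# The maximum-likelihood variance estimator divides by n and is biased.
samples = rng.normal(0.0, np.sqrt(sigma2_0), size=(n_reps, n))
estimates = samples.var(axis=1)          # ddof=0 by default: the biased MLE

mse = np.mean((estimates - sigma2_0) ** 2)
variance = np.var(estimates)
bias = np.mean(estimates) - sigma2_0

print(mse)                  # left-hand side of the decomposition
print(variance + bias**2)   # right-hand side: Var + Bias^2
```

The two printed numbers coincide up to floating-point rounding, because the decomposition is an algebraic identity rather than an approximation.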

More details

In the lecture on point estimation, you can find more details about the mean squared error of an estimator.

In the lecture on predictive models, you can find a different definition of MSE that applies to predictions (not to parameter estimates).


How to cite

Please cite as:

Taboga, Marco (2021). "Mean squared error of an estimator", Lectures on probability theory and mathematical statistics. Kindle Direct Publishing. Online appendix. https://www.statlect.com/glossary/mean-squared-error.
