This lecture discusses mean-square convergence. We deal first with mean-square convergence of sequences of random variables and then with mean-square convergence of sequences of random vectors.

In the lecture entitled Sequences of random variables and their convergence we have stressed the fact that different concepts of convergence are based on different ways of measuring the distance between two random variables (how "close to each other" two random variables are). The concept of mean-square convergence, or convergence in mean-square, is based on the following intuition: two random variables are "close to each other" if the square of their difference is on average small.

Let $\{X_n\}$ be a sequence of random variables defined on a sample space $\Omega$. Let $X$ be a random variable. The sequence $\{X_n\}$ is said to converge to $X$ in mean-square if $X_n$ converges to $X$ according to the metric $d$ defined as follows:
$$d(X_n, X) = \sqrt{\mathrm{E}\left[(X_n - X)^2\right]}$$
(if you do not understand what it means "to converge according to a metric", go to the lecture entitled Limit of a sequence).

Note that $d$ is well-defined only if the expected value on the right-hand side exists, which is usually ensured by requiring that $X_n$ and $X$ be square integrable.

Intuitively, for a fixed sample point $\omega$, the squared difference $(X_n(\omega) - X(\omega))^2$ between the two realizations of $X_n$ and $X$ provides a measure of how different those two realizations are. The mean squared difference $\mathrm{E}\left[(X_n - X)^2\right]$ provides a measure of how different those two realizations are on average (as $\omega$ varies). If this mean squared difference becomes smaller and smaller by increasing $n$, then the sequence $\{X_n\}$ converges to $X$.

We summarize the concept of mean-square convergence in the following definition.

Definition
Let $\{X_n\}$ be a sequence of square integrable random variables defined on a sample space $\Omega$. We say that $\{X_n\}$ is **mean-square convergent** (or **convergent in mean-square**) if and only if there exists a square integrable random variable $X$ such that $X_n$ converges to $X$ according to the metric $d$, that is,
$$\lim_{n\to\infty} d(X_n, X) = \lim_{n\to\infty} \sqrt{\mathrm{E}\left[(X_n - X)^2\right]} = 0$$
$X$ is called the **mean-square limit** of the sequence and convergence is indicated by
$$X_n \xrightarrow{L^2} X$$
or by
$$\lim_{n\to\infty} X_n \overset{L^2}{=} X$$

Note that in the definition above, $\lim_{n\to\infty} d(X_n, X) = 0$ is just the usual criterion for convergence, while the superscript $L^2$ indicates that convergence is in the $L^p$ space $L^2$, because both $X_n$ and $X$ have been required to be square integrable.
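The metric $d$ can be estimated numerically. The following sketch (a hypothetical sequence, not one from this lecture) approximates $d(X_n, X)$ by Monte Carlo for $X_n = X + e_n$, where the noise term $e_n$ has variance $1/n$; the names `ms_distance` and the specific noise model are illustrative assumptions.

```python
import numpy as np

# Hypothetical sequence (assumption for illustration): X_n = X + e_n,
# where the noise e_n has variance 1/n, so X_n converges to X in mean-square.
rng = np.random.default_rng(0)
num_draws = 200_000
x = rng.standard_normal(num_draws)  # realizations of X

def ms_distance(n):
    # Monte Carlo estimate of d(X_n, X) = sqrt(E[(X_n - X)^2]).
    e_n = rng.standard_normal(num_draws) / np.sqrt(n)  # noise with variance 1/n
    x_n = x + e_n                                      # realizations of X_n
    return np.sqrt(np.mean((x_n - x) ** 2))

for n in [1, 100, 10_000]:
    print(n, ms_distance(n))  # shrinks roughly like 1 / sqrt(n)
```

The printed distances decrease toward zero, which is exactly the criterion $\lim_{n\to\infty} d(X_n, X) = 0$ in the definition.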

The following example illustrates the concept of mean-square convergence.

Example Let $\{Y_n\}$ be a covariance stationary sequence of random variables such that all the random variables in the sequence have the same expected value $\mu$, the same variance $\sigma^2$ and zero covariance with each other. Define the sample mean $X_n$ as follows:
$$X_n = \frac{1}{n}\sum_{i=1}^{n} Y_i$$
and define a constant random variable $X = \mu$. The distance between a generic term of the sequence $\{X_n\}$ and $X$ is
$$d(X_n, X) = \sqrt{\mathrm{E}\left[(X_n - \mu)^2\right]}$$
But $\mu$ is equal to the expected value of $X_n$ because
$$\mathrm{E}[X_n] = \frac{1}{n}\sum_{i=1}^{n}\mathrm{E}[Y_i] = \frac{1}{n}\, n\mu = \mu$$
Therefore,
$$\mathrm{E}\left[(X_n - \mu)^2\right] = \mathrm{E}\left[(X_n - \mathrm{E}[X_n])^2\right] = \mathrm{Var}[X_n]$$
by the very definition of variance. In turn, the variance of $X_n$ is
$$\mathrm{Var}[X_n] = \frac{1}{n^2}\sum_{i=1}^{n}\mathrm{Var}[Y_i] = \frac{1}{n^2}\, n\sigma^2 = \frac{\sigma^2}{n}$$
(the covariance terms vanish because the terms of the sequence have zero covariance with each other). Thus,
$$d(X_n, X) = \frac{\sigma}{\sqrt{n}}$$
and
$$\lim_{n\to\infty} d(X_n, X) = \lim_{n\to\infty} \frac{\sigma}{\sqrt{n}} = 0$$
But this is just the definition of mean-square convergence of $X_n$ to $X$. Therefore, the sequence $\{X_n\}$ converges in mean-square to the constant random variable $X = \mu$.
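The conclusion of the example can be checked by simulation. The sketch below assumes i.i.d. normal draws (which in particular have equal mean $\mu$, equal variance $\sigma^2$, and zero covariance with each other) and verifies that the mean-square distance between the sample mean and $\mu$ behaves like $\sigma/\sqrt{n}$; the function name and the particular values of $\mu$ and $\sigma$ are illustrative.

```python
import numpy as np

# Assumption for illustration: i.i.d. normal draws with mean mu and
# standard deviation sigma (a special case of the example's setup).
rng = np.random.default_rng(42)
mu, sigma = 2.0, 3.0
num_reps = 100_000

def ms_distance_of_sample_mean(n):
    # Each row is one realization of (Y_1, ..., Y_n); row means are
    # realizations of the sample mean X_n.
    y = rng.normal(mu, sigma, size=(num_reps, n))
    x_n = y.mean(axis=1)
    # Monte Carlo estimate of d(X_n, mu) = sqrt(E[(X_n - mu)^2]).
    return np.sqrt(np.mean((x_n - mu) ** 2))

for n in [1, 9, 100]:
    print(n, ms_distance_of_sample_mean(n), sigma / np.sqrt(n))
```

Each printed pair should roughly agree, matching the derivation $d(X_n, X) = \sigma/\sqrt{n}$ above.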

The above notion of convergence generalizes to sequences of random vectors in a straightforward manner.

Let $\{X_n\}$ be a sequence of random vectors defined on a sample space $\Omega$, where each random vector $X_n$ has dimension $K \times 1$. The sequence of random vectors $\{X_n\}$ is said to converge to a random vector $X$ in mean-square if $X_n$ converges to $X$ according to the metric $d$ defined as follows:
$$d(X_n, X) = \sqrt{\mathrm{E}\left[\lVert X_n - X \rVert^2\right]} = \sqrt{\mathrm{E}\left[\sum_{k=1}^{K}\left(X_{n,k} - X_k\right)^2\right]}$$
where $\lVert X_n - X \rVert$ is the Euclidean norm of the difference between $X_n$ and $X$ and the subscript $k$ is used to indicate the individual components of the vectors $X_n$ and $X$.

Of course, $d$ is well-defined only if the expected value on the right-hand side exists. A sufficient condition for $d$ to be well-defined is that all the components of $X_n$ and $X$ be square integrable random variables.

Intuitively, for a fixed sample point $\omega$, the square of the Euclidean norm $\lVert X_n(\omega) - X(\omega)\rVert^2$ of the difference between the two realizations of $X_n$ and $X$ provides a measure of how different those two realizations are. The mean of the square of the Euclidean norm, $\mathrm{E}\left[\lVert X_n - X\rVert^2\right]$, provides a measure of how different those two realizations are on average (as $\omega$ varies). If $\mathrm{E}\left[\lVert X_n - X\rVert^2\right]$ becomes smaller and smaller by increasing $n$, then the sequence of random vectors $\{X_n\}$ converges to the vector $X$.

The following is a formal definition of mean-square convergence for random vectors.

Definition
Let $\{X_n\}$ be a sequence of random vectors defined on a sample space $\Omega$, whose components are square integrable random variables. We say that $\{X_n\}$ is **mean-square convergent** (or **convergent in mean-square**) if and only if there exists a random vector $X$ with square integrable components such that $X_n$ converges to $X$ according to the metric $d$, that is,
$$\lim_{n\to\infty} d(X_n, X) = \lim_{n\to\infty}\sqrt{\mathrm{E}\left[\lVert X_n - X\rVert^2\right]} = 0$$
$X$ is called the **mean-square limit** of the sequence and convergence is indicated by
$$X_n \xrightarrow{L^2} X$$
or by
$$\lim_{n\to\infty} X_n \overset{L^2}{=} X$$

Note that in the definition above, $\lim_{n\to\infty} d(X_n, X) = 0$ is just the usual criterion for convergence, while the superscript $L^2$ indicates that convergence is in the $L^p$ space $L^2$, because both $X_n$ and $X$ have been required to have square integrable components.

Now, denote by $\{X_{n,k}\}$ the sequence of the $k$-th components of the vectors $X_n$. It can be proved that the sequence of random vectors $\{X_n\}$ is convergent in mean-square if and only if all the $K$ sequences of random variables $\{X_{n,k}\}$ ($k = 1, \ldots, K$) are convergent in mean-square.

Proposition Let $\{X_n\}$ be a sequence of random vectors defined on a sample space $\Omega$, such that their components are square integrable random variables. Denote by $\{X_{n,k}\}$ the sequence of random variables obtained by taking the $k$-th component of each random vector $X_n$. The sequence $\{X_n\}$ converges in mean-square to the random vector $X$ if and only if $\{X_{n,k}\}$ converges in mean-square to the random variable $X_k$ (the $k$-th component of $X$) for each $k = 1, \ldots, K$.
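The proposition can be illustrated numerically. The sketch below (a hypothetical two-dimensional sequence, with names and noise model chosen for illustration) uses the fact that $\mathrm{E}\left[\lVert X_n - X\rVert^2\right]$ is the sum of the componentwise mean squared differences, so it vanishes if and only if each component sequence converges in mean-square.

```python
import numpy as np

# Assumption for illustration: X_n = mu + noise, a 2-dimensional vector
# whose component noises each have variance 1/n, so both component
# sequences (and hence the vector sequence) converge in mean-square.
rng = np.random.default_rng(1)
num_draws = 100_000
mu = np.array([1.0, -2.0])  # the mean-square limit X (a constant vector)

def squared_norm_expectation(n):
    noise = rng.standard_normal((num_draws, 2)) / np.sqrt(n)
    x_n = mu + noise                        # realizations of the vector X_n
    sq = np.sum((x_n - mu) ** 2, axis=1)    # squared Euclidean norm, per draw
    # E[||X_n - X||^2] = sum over k of E[(X_{n,k} - X_k)^2],
    # so it tends to 0 iff every componentwise term does.
    return sq.mean()

print([round(squared_norm_expectation(n), 3) for n in (1, 10, 100)])
```

With two components each contributing variance $1/n$, the printed values shrink like $2/n$, and each component's own mean squared difference shrinks like $1/n$, in line with the proposition.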

Below you can find some exercises with explained solutions.

Let $Z$ be a random variable having a uniform distribution on the interval $[0,1]$. In other words, $Z$ is a continuous random variable with support
$$R_Z = [0,1]$$
and probability density function
$$f_Z(z) = \begin{cases} 1 & \text{if } z \in [0,1] \\ 0 & \text{otherwise} \end{cases}$$
Consider a sequence of random variables $\{X_n\}$ whose generic term is
$$X_n = 1_{\{Z \in [1/n,\,1]\}}$$
where $1_{\{Z \in [1/n,\,1]\}}$ is the indicator function of the event $\{Z \in [1/n, 1]\}$.

Find the mean-square limit (if it exists) of the sequence $\{X_n\}$.

Solution

When $n$ tends to infinity, the interval $[1/n, 1]$ becomes similar to the interval $[0,1]$ because
$$\lim_{n\to\infty}\frac{1}{n} = 0$$
Therefore, we conjecture that the indicators $1_{\{Z \in [1/n,\,1]\}}$ converge in mean-square to the indicator $1_{\{Z \in [0,1]\}}$. But $1_{\{Z \in [0,1]\}}$ is always equal to $1$, because $Z$ takes values in $[0,1]$, so our conjecture is that the sequence $\{X_n\}$ converges in mean-square to $X = 1$. To verify our conjecture, we need to verify that
$$\lim_{n\to\infty}\mathrm{E}\left[(X_n - 1)^2\right] = 0$$

The expected value can be computed as follows:
$$\mathrm{E}\left[(X_n - 1)^2\right] = \mathrm{E}\left[\left(1_{\{Z \in [1/n,\,1]\}} - 1\right)^2\right] = \mathrm{E}\left[1_{\{Z < 1/n\}}\right] = P\left(Z < \frac{1}{n}\right) = \int_0^{1/n} 1\, dz = \frac{1}{n}$$
Thus, the sequence $\{X_n\}$ converges in mean-square to $X = 1$ because
$$\lim_{n\to\infty}\mathrm{E}\left[(X_n - 1)^2\right] = \lim_{n\to\infty}\frac{1}{n} = 0$$
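The solution can also be confirmed by simulation. The sketch below assumes $Z$ uniform on $[0,1]$ and $X_n$ the indicator of $\{Z \geq 1/n\}$; the function name `mean_square_error` is chosen for illustration.

```python
import numpy as np

# Z uniform on [0, 1]; X_n is the indicator of {Z >= 1/n}
# (equivalently, of {Z in [1/n, 1]}, since Z never exceeds 1).
rng = np.random.default_rng(7)
z = rng.uniform(0.0, 1.0, size=1_000_000)

def mean_square_error(n):
    x_n = (z >= 1.0 / n).astype(float)  # realizations of the indicator X_n
    return np.mean((x_n - 1.0) ** 2)    # estimates E[(X_n - 1)^2]

for n in [2, 10, 100]:
    print(n, mean_square_error(n))      # close to 1/n, which tends to 0
```

The estimates sit near $1/n$, matching the exact computation $\mathrm{E}\left[(X_n - 1)^2\right] = 1/n$.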

Let $\{X_n\}$ be a sequence of discrete random variables. Let the probability mass function of a generic term $X_n$ of the sequence be
$$p_{X_n}(x) = \begin{cases} 1 - \dfrac{1}{n} & \text{if } x = 0 \\[1ex] \dfrac{1}{n} & \text{if } x = n \\[1ex] 0 & \text{otherwise} \end{cases}$$

Find the mean-square limit (if it exists) of the sequence $\{X_n\}$.

Solution

Note that
$$\lim_{n\to\infty} P(X_n = 0) = \lim_{n\to\infty}\left(1 - \frac{1}{n}\right) = 1$$
Therefore, one would expect that the sequence converges to the constant random variable $X = 0$. However, the sequence does not converge in mean-square to $X$. The distance of a generic term of the sequence from $X$ is
$$\mathrm{E}\left[(X_n - X)^2\right] = \mathrm{E}\left[X_n^2\right] = 0^2\cdot\left(1 - \frac{1}{n}\right) + n^2\cdot\frac{1}{n} = n$$
Thus,
$$\lim_{n\to\infty}\mathrm{E}\left[(X_n - X)^2\right] = \lim_{n\to\infty} n = \infty$$
while, if the sequence were convergent in mean-square, we would have
$$\lim_{n\to\infty}\mathrm{E}\left[(X_n - X)^2\right] = 0$$
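The divergence can be computed exactly from the probability mass function (assuming, as in the exercise, that $X_n$ takes value $0$ with probability $1 - 1/n$ and value $n$ with probability $1/n$); exact fractions avoid any floating-point rounding.

```python
from fractions import Fraction

def mean_square_distance_from_zero(n):
    # E[X_n^2] = 0^2 * (1 - 1/n) + n^2 * (1/n) = n, computed exactly.
    return 0 * (1 - Fraction(1, n)) + n ** 2 * Fraction(1, n)

# The mean squared distance from 0 grows without bound,
# so there is no mean-square convergence to 0.
print([mean_square_distance_from_zero(n) for n in (1, 10, 100)])
```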

Does the sequence in the previous exercise converge in probability?

Solution

The sequence $\{X_n\}$ converges in probability to the constant random variable $X = 0$ because, for any $\varepsilon > 0$, we have that
$$\lim_{n\to\infty} P\left(\left|X_n - 0\right| > \varepsilon\right) = \lim_{n\to\infty} P\left(X_n = n\right) = \lim_{n\to\infty}\frac{1}{n} = 0$$
(for $n > \varepsilon$, the only value of $X_n$ exceeding $\varepsilon$ in absolute value is $n$).
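The contrast between the two exercises can be made concrete with a tiny exact computation (same assumed probability mass function as above; the function name is illustrative): the probability of being far from $0$ vanishes even though the mean squared distance diverges.

```python
def prob_far_from_zero(n, eps):
    # X_n takes value 0 with probability 1 - 1/n and value n with
    # probability 1/n; for 0 < eps < n, only the outcome X_n = n
    # exceeds eps in absolute value.
    assert 0 < eps < n
    return 1.0 / n

# P(|X_n| > 0.5) tends to 0, so X_n converges to 0 in probability,
# even though E[X_n^2] = n diverges.
print([prob_far_from_zero(n, 0.5) for n in (1, 10, 100)])
```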
