
Uniform convergence in probability

by Marco Taboga, PhD

In this lecture, we define and explain the concept of uniform convergence in probability, a concept which is often encountered in mathematical statistics, especially in the theory of parameter estimation.

To keep things simple, we first define the concept for sequences of random variables, and we then extend the definition to sequences of random vectors.


Uniform convergence in probability for sequences of random variables

Remember that the concept of convergence in probability was defined for sequences of random variables defined on a sample space $\Omega$. In other words, we had a sequence $\{X_n\}$ of random variables, and each random variable in the sequence was a function from the sample space $\Omega$ (the same space for all variables in the sequence) to the set of real numbers. This can be denoted by
$$X_n = X_n(\omega)$$
to stress the fact that the value taken by the random variable $X_n$ depends on the outcome $\omega \in \Omega$ of the probabilistic experiment.
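For convenience, we also recall the definition being referred to (restated here for reference): $X_n$ converges in probability to a random variable $X$ if and only if, for any $\varepsilon > 0$,
$$\lim_{n\rightarrow\infty} P\left(\left\vert X_n - X \right\vert > \varepsilon\right) = 0$$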

The concept of uniform convergence in probability enters into play when we have a sequence of random variables $\{Q_n\}$, and the value taken by any one of them depends not only on the outcome of the probabilistic experiment $\omega \in \Omega$, but also on a parameter $\theta$:
$$Q_n = Q_n(\omega, \theta)$$
The set of possible values of $\theta$ is called the parameter space and is denoted by $\Theta$. It is usually a set of real vectors, but it can also be a more complicated set.

Example In maximum likelihood estimation, the log-likelihood is a function of the sample data (a random vector that depends on $\omega$) and of a parameter $\theta$. By increasing the sample size, we obtain a sequence of log-likelihoods that depend both on $\omega$ and on $\theta$. See the lecture entitled Maximum likelihood for more details.
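As a concrete illustration of this notation (the specific formula below is added here only as an example and is not part of the original lecture), if the sample consists of $n$ independent observations $x_1, \ldots, x_n$ with common density $f(x; \theta)$, the average log-likelihood can be written as
$$Q_n(\omega, \theta) = \frac{1}{n} \sum_{i=1}^{n} \ln f\left(x_i(\omega); \theta\right)$$
which depends on $\omega$ through the observations and on the parameter $\theta$ through the density.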

For notational convenience, denote by $Q_{n,\theta}$ the random variable obtained by keeping the parameter fixed at a value $\theta$, and by $\{Q_{n,\theta}\}$ the corresponding sequence.

A possible way to define convergence in probability for the sequence $\{Q_n\}$ is to require that each sequence $\{Q_{n,\theta}\}$, obtained by fixing $\theta$, converge in probability. This is called pointwise convergence in probability.

Definition Let $Q(\omega, \theta)$ be a function defined on $\Omega$ and $\Theta$, and denote by $Q_{\theta}$ the random variable obtained by keeping the parameter fixed at a value $\theta$. The sequence $\{Q_n\}$ is said to be pointwise convergent in probability to $Q$ if and only if the sequence $\{Q_{n,\theta}\}$ is convergent in probability to $Q_{\theta}$ for each $\theta \in \Theta$.

In statistical applications, pointwise convergence is often not sufficient to obtain desired results, and a stronger concept, that of uniform convergence, is employed. Before introducing this stronger concept, let us present some more details about pointwise convergence, which will be helpful to understand the remainder of this lecture.

Proposition The sequence $\{Q_n\}$ is pointwise convergent in probability to $Q$ if and only if one of the following equivalent conditions holds:

  1. for any $\varepsilon > 0$ and for any $\theta \in \Theta$, it holds that
  $$\lim_{n\rightarrow\infty} P\left(\left\vert Q_{n,\theta} - Q_{\theta} \right\vert > \varepsilon\right) = 0$$

  2. for any $\varepsilon > 0$, for any $\theta \in \Theta$, and for any $\delta > 0$, there exists an integer $n_{0}(\varepsilon, \delta, \theta)$, depending on $\varepsilon$, $\delta$ and $\theta$, such that
  $$P\left(\left\vert Q_{n,\theta} - Q_{\theta} \right\vert > \varepsilon\right) < \delta$$
  if $n > n_{0}$.

  3. for any $\theta \in \Theta$, it holds that
  $$\left\vert Q_{n,\theta} - Q_{\theta} \right\vert \overset{P}{\rightarrow} 0$$
  where $\overset{P}{\rightarrow}$ denotes convergence in probability.

Proof

Condition 1 is just the usual definition of convergence in probability, applied to the sequence of random variables $\{Q_{n,\theta}\}$ that is obtained by keeping the parameter fixed at a specific value $\theta$. Condition 2 is just another way to write the same thing. Note that
$$P\left(\left\vert Q_{n,\theta} - Q_{\theta} \right\vert > \varepsilon\right)$$
is just a sequence of real numbers (indexed by $n$), and that $0$ is its limit (this is what Condition 1 asserts). By the very definition of limit, this means that, for any $\delta > 0$, there exists an integer $n_{0}$ such that
$$P\left(\left\vert Q_{n,\theta} - Q_{\theta} \right\vert > \varepsilon\right) < \delta$$
if $n > n_{0}$. The integer $n_{0}$ depends not only on $\delta$, but also on $\theta$ and $\varepsilon$, because each choice of $\theta$ and $\varepsilon$ gives rise to a different sequence
$$P\left(\left\vert Q_{n,\theta} - Q_{\theta} \right\vert > \varepsilon\right)$$
Condition 3 is obtained by re-writing Condition 1 in a slightly different way:
$$\lim_{n\rightarrow\infty} P\left(\left\vert \left\vert Q_{n,\theta} - Q_{\theta} \right\vert - 0 \right\vert > \varepsilon\right) = 0$$
which is just the definition of convergence in probability of the random variable $\left\vert Q_{n,\theta} - Q_{\theta} \right\vert$ to $0$.

We are now ready to provide a definition of uniform convergence.

Definition The sequence $\{Q_n\}$ is uniformly convergent in probability to $Q$ if and only if
$$\sup_{\theta \in \Theta} \left\vert Q_{n,\theta} - Q_{\theta} \right\vert \overset{P}{\rightarrow} 0$$
where $\overset{P}{\rightarrow}$ denotes convergence in probability.

In other words, instead of requiring that the distance $\left\vert Q_{n,\theta} - Q_{\theta} \right\vert$ converge in probability to $0$ for each $\theta$ (see Condition 3 above), we require convergence of
$$\sup_{\theta \in \Theta} \left\vert Q_{n,\theta} - Q_{\theta} \right\vert$$
which is the maximum distance that can be found by ranging over the space of parameters.
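To make the definition more tangible, here is a minimal simulation sketch (it is not part of the original lecture, and the setup is purely hypothetical): take $Q_{n,\theta} = \theta \bar{X}_n$, where $\bar{X}_n$ is the sample mean of $n$ independent standard normal variables, $\Theta = [0,1]$, and the limit is $Q_{\theta} = 0$. The code estimates by Monte Carlo the probability that $\sup_{\theta \in \Theta} \left\vert Q_{n,\theta} - Q_{\theta} \right\vert$ exceeds a tolerance $\varepsilon$.

import numpy as np

# Hypothetical example: Q_{n,theta} = theta * mean(X_1, ..., X_n), with X_i ~ N(0, 1),
# Theta = [0, 1], and limit Q_theta = 0.  Since |theta * Xbar_n| <= |Xbar_n| on [0, 1],
# sup_theta |Q_{n,theta} - Q_theta| = |Xbar_n|, which converges in probability to 0.

rng = np.random.default_rng(0)
epsilon = 0.1                              # tolerance in the definition
theta_grid = np.linspace(0.0, 1.0, 101)    # discretization of the parameter space
n_simulations = 10_000

for n in (10, 100, 1000):
    exceed = 0
    for _ in range(n_simulations):
        xbar = rng.standard_normal(n).mean()          # sample mean of one experiment
        sup_dist = np.max(np.abs(theta_grid * xbar))  # sup over theta of |Q_{n,theta} - 0|
        exceed += sup_dist > epsilon
    print(f"n = {n}: estimated P(sup distance > epsilon) = {exceed / n_simulations:.3f}")

The estimated probability shrinks towards zero as $n$ grows, in accordance with Condition 1 of the proposition below.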

To better understand the differences with respect to pointwise convergence, we report a set of equivalent characterizations of uniform convergence.

Proposition The sequence $\{Q_n\}$ is uniformly convergent in probability to $Q$ if and only if one of the following equivalent conditions holds:

  1. for any $\varepsilon > 0$, it holds that
  $$\lim_{n\rightarrow\infty} P\left(\sup_{\theta \in \Theta} \left\vert Q_{n,\theta} - Q_{\theta} \right\vert > \varepsilon\right) = 0$$

  2. for any $\varepsilon > 0$ and for any $\delta > 0$, there exists an integer $n_{0}(\varepsilon, \delta)$, depending on $\varepsilon$ and $\delta$, such that
  $$P\left(\left\vert Q_{n,\theta} - Q_{\theta} \right\vert > \varepsilon\right) < \delta$$
  for all $\theta \in \Theta$ if $n > n_{0}$.

  3. it holds that
  $$\sup_{\theta \in \Theta} \left\vert Q_{n,\theta} - Q_{\theta} \right\vert \overset{P}{\rightarrow} 0$$

Proof

Condition 3 is just a repetition of the definition of uniform convergence in probability. We report it as Condition 3 in order to preserve the symmetry with the proposition that gives the equivalent conditions for pointwise convergence. Condition 3 states that the sequence of random variables
$$\sup_{\theta \in \Theta} \left\vert Q_{n,\theta} - Q_{\theta} \right\vert$$
converges in probability to $0$. Condition 1 restates the same thing by using the definition of convergence in probability. Condition 2 is obtained from Condition 1 by using the definition of limit of a sequence of real numbers. In fact,
$$P\left(\sup_{\theta \in \Theta} \left\vert Q_{n,\theta} - Q_{\theta} \right\vert > \varepsilon\right)$$
is just a sequence of real numbers (indexed by $n$), and $0$ is its limit. By the very definition of limit, this means that, for any $\delta > 0$, there exists an integer $n_{0}$ such that
$$P\left(\sup_{\theta \in \Theta} \left\vert Q_{n,\theta} - Q_{\theta} \right\vert > \varepsilon\right) < \delta$$
if $n > n_{0}$. The integer $n_{0}$ depends not only on $\delta$, but also on $\varepsilon$, because each choice of $\varepsilon$ gives rise to a different sequence
$$P\left(\sup_{\theta \in \Theta} \left\vert Q_{n,\theta} - Q_{\theta} \right\vert > \varepsilon\right)$$
Also note that, for any specific $\theta$, we have
$$P\left(\left\vert Q_{n,\theta} - Q_{\theta} \right\vert > \varepsilon\right) \leq P\left(\sup_{\theta \in \Theta} \left\vert Q_{n,\theta} - Q_{\theta} \right\vert > \varepsilon\right)$$
because $\left\vert Q_{n,\theta} - Q_{\theta} \right\vert$ is always smaller than or equal to its supremum. Therefore, for any $\varepsilon > 0$ and for any $\delta > 0$, there exists an integer $n_{0}$, depending on $\varepsilon$ and $\delta$, such that
$$P\left(\left\vert Q_{n,\theta} - Q_{\theta} \right\vert > \varepsilon\right) < \delta$$
if $n > n_{0}$.

You should compare these equivalent conditions with those for pointwise convergence. In particular, note that the integer $n_{0}$ such that
$$P\left(\left\vert Q_{n,\theta} - Q_{\theta} \right\vert > \varepsilon\right) < \delta$$
depends only on $\varepsilon$ and $\delta$ in the definition of uniform convergence, while it also depends on $\theta$ in the definition of pointwise convergence. From this fact, it follows that uniform convergence implies pointwise convergence, but the converse is not true (being able to find an $n_{0}$ that satisfies the condition for a given $\theta$ does not mean that we are also able to find an $n_{0}$ that satisfies the condition simultaneously for all possible $\theta$).
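A simple counterexample (added here for illustration; it is not part of the original lecture) uses degenerate random variables that do not depend on $\omega$ at all. Take $\Theta = [0,1)$ and let
$$Q_{n,\theta}(\omega) = \theta^{n}, \qquad Q_{\theta}(\omega) = 0$$
For each fixed $\theta \in [0,1)$ we have $\left\vert Q_{n,\theta} - Q_{\theta} \right\vert = \theta^{n} \rightarrow 0$, so the sequence converges pointwise in probability (trivially, since the variables are non-random). However, for every $n$,
$$\sup_{\theta \in [0,1)} \left\vert Q_{n,\theta} - Q_{\theta} \right\vert = \sup_{\theta \in [0,1)} \theta^{n} = 1$$
so that $P\left(\sup_{\theta \in [0,1)} \left\vert Q_{n,\theta} - Q_{\theta} \right\vert > \varepsilon\right) = 1$ for any $\varepsilon < 1$, and the convergence is not uniform.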

Uniform convergence in probability for sequences of random vectors

Extending the concept to random vectors is straightforward.

We now suppose that $\{Q_n\}$ is a sequence of $K \times 1$ random vectors that depend both on the outcome of the probabilistic experiment $\omega \in \Omega$ and on the parameter $\theta \in \Theta$. The notation is the same as before.

Definition The sequence of random vectors $\{Q_n\}$ is uniformly convergent in probability to $Q$ if and only if
$$\sup_{\theta \in \Theta} \left\Vert Q_{n,\theta} - Q_{\theta} \right\Vert \overset{P}{\rightarrow} 0$$
where $\overset{P}{\rightarrow}$ denotes convergence in probability, and $\left\Vert Q_{n,\theta} - Q_{\theta} \right\Vert$ denotes the Euclidean norm of the vector $Q_{n,\theta} - Q_{\theta}$.

In other words, the Euclidean norm
$$\left\Vert Q_{n,\theta} - Q_{\theta} \right\Vert$$
is a random quantity that depends on the parameter $\theta$. By taking the supremum over $\theta$, we obtain another random quantity, that is,
$$\sup_{\theta \in \Theta} \left\Vert Q_{n,\theta} - Q_{\theta} \right\Vert$$
that does not depend on $\theta$. If this random quantity converges in probability to $0$, then we have uniform convergence of the sequence of random vectors.
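For concreteness, denoting by $Q_{n,\theta,k}$ and $Q_{\theta,k}$ the $k$-th components of the two vectors (a notation introduced only for this remark), the Euclidean norm above is
$$\left\Vert Q_{n,\theta} - Q_{\theta} \right\Vert = \sqrt{\sum_{k=1}^{K} \left(Q_{n,\theta,k} - Q_{\theta,k}\right)^{2}}$$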

The equivalent conditions for convergence are the same as those given for random variables (just replace absolute values with Euclidean norms).

How to cite

Please cite as:

Taboga, Marco (2021). "Uniform convergence in probability", Lectures on probability theory and mathematical statistics. Kindle Direct Publishing. Online appendix. https://www.statlect.com/asymptotic-theory/uniform-convergence-in-probability.
