In this lecture, we define and explain the concept of uniform convergence in probability, a concept which is often encountered in mathematical statistics, especially in the theory of parameter estimation.
To keep things simple, we first define the concept for sequences of random variables, and we then extend the definition to sequences of random vectors.
Remember that the concept of convergence in probability was defined for sequences of random variables defined on a sample space $\Omega$. In other words, we had a sequence of random variables $\{X_n\}$, and each random variable in the sequence was a function from the sample space $\Omega$ (the same space for all variables in the sequence) to the set of real numbers. This can be denoted by $X_n(\omega)$ to stress the fact that the value taken by the random variable depends on the outcome $\omega$ of the probabilistic experiment.
The concept of uniform convergence in probability enters into play when we have a sequence of random variables, and the value taken by any one of them depends not only on the outcome $\omega$ of the probabilistic experiment, but also on a parameter $\theta$:$$X_n(\omega,\theta).$$The set of possible values of $\theta$ is called the parameter space and it is denoted by $\Theta$. It is usually a set of real vectors, but it can also be a more complicated set.
Example In maximum likelihood estimation, the log-likelihood is a function of the sample data (a random vector that depends on $\omega$) and of a parameter $\theta$. By increasing the sample size $n$, we obtain a sequence of log-likelihoods that depend both on $\omega$ and on $\theta$. See the lecture entitled Maximum likelihood for more details.
For notational convenience, denote by $X_n(\theta)$ the random variable obtained by keeping the parameter fixed at a value $\theta$, and by $\{X_n(\theta)\}$ the corresponding sequence.
A possible way to define convergence in probability for the sequence $\{X_n(\omega,\theta)\}$ is to require that each sequence $\{X_n(\theta)\}$, obtained by fixing $\theta$, converge in probability. This is called pointwise convergence in probability.
Definition Let $X(\omega,\theta)$ be a function defined on $\Omega$ and $\Theta$, and denote by $X(\theta)$ the random variable obtained by keeping the parameter fixed at a value $\theta$. The sequence $\{X_n(\theta)\}$ is said to be pointwise convergent in probability to $X(\theta)$ if and only if the sequence $\{X_n(\theta)\}$ is convergent in probability to $X(\theta)$ for each $\theta\in\Theta$.
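As a minimal Monte Carlo sketch of this definition (the sequence below is a hypothetical example, not taken from the text), consider $X_n(\theta)=\theta+\bar{Z}_n$, where $\bar{Z}_n$ is the mean of $n$ standard normal draws; for each fixed $\theta$, this converges in probability to $X(\theta)=\theta$, and the probability $\Pr(|X_n(\theta)-X(\theta)|>\varepsilon)$ can be estimated by simulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def X_n(theta, n, reps):
    """Draw `reps` realizations of X_n(theta) = theta + mean of n N(0,1) noise terms."""
    noise = rng.standard_normal((reps, n)).mean(axis=1)
    return theta + noise

theta, eps = 2.0, 0.1
for n in (10, 100, 1000):
    # Monte Carlo estimate of P(|X_n(theta) - X(theta)| > eps) at this fixed theta
    p_hat = np.mean(np.abs(X_n(theta, n, 20_000) - theta) > eps)
    print(f"n={n}: estimated probability {p_hat:.3f}")
```

The estimated probability shrinks as $n$ grows, which is exactly Condition 1 of the proposition below, checked at a single value of the parameter.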
In statistical applications, pointwise convergence is often not sufficient to obtain desired results, and a stronger concept, that of uniform convergence, is employed. Before introducing this stronger concept, let us present some more details about pointwise convergence, which will be helpful to understand the remainder of this lecture.
Proposition The sequence $\{X_n(\theta)\}$ is pointwise convergent in probability to $X(\theta)$ if and only if one of the following equivalent conditions holds:
for any $\theta\in\Theta$, and for any $\varepsilon>0$, it holds that$$\lim_{n\to\infty}\Pr\left(\left|X_n(\theta)-X(\theta)\right|>\varepsilon\right)=0$$
for any $\theta\in\Theta$, for any $\varepsilon>0$, and for any $\delta>0$, there exists an integer $n_0$, depending on $\theta$, $\varepsilon$ and $\delta$, such that$$\Pr\left(\left|X_n(\theta)-X(\theta)\right|>\varepsilon\right)<\delta$$if $n\geq n_0$.
for any $\theta\in\Theta$, it holds that$$\left|X_n(\theta)-X(\theta)\right|\ \overset{P}{\longrightarrow}\ 0$$where $\overset{P}{\longrightarrow}$ denotes convergence in probability.
Condition 1 is just the usual definition of convergence in probability, applied to the sequence of random variables that is obtained by keeping the parameter fixed at a specific value $\theta$. Condition 2 is just another way to write the same thing. Note that$$\Pr\left(\left|X_n(\theta)-X(\theta)\right|>\varepsilon\right)$$is just a sequence of real numbers (indexed by $n$), and that $0$ is its limit. By the very definition of limit, this means that, for any $\delta>0$, there exists an integer $n_0$ such that$$\Pr\left(\left|X_n(\theta)-X(\theta)\right|>\varepsilon\right)<\delta$$if $n\geq n_0$. The integer $n_0$ depends not only on $\delta$, but also on $\theta$ and $\varepsilon$, because each choice of $\theta$ and $\varepsilon$ gives rise to a different sequence$$\Pr\left(\left|X_n(\theta)-X(\theta)\right|>\varepsilon\right).$$Condition 3 is obtained by re-writing Condition 1 in a slightly different way:$$\lim_{n\to\infty}\Pr\left(\Big|\left|X_n(\theta)-X(\theta)\right|-0\Big|>\varepsilon\right)=0,$$which is just the definition of convergence in probability of the random variable $\left|X_n(\theta)-X(\theta)\right|$ to $0$.
We are now ready to provide a definition of uniform convergence.
Definition The sequence $\{X_n(\theta)\}$ is uniformly convergent in probability to $X(\theta)$ if and only if$$\sup_{\theta\in\Theta}\left|X_n(\theta)-X(\theta)\right|\ \overset{P}{\longrightarrow}\ 0$$where $\overset{P}{\longrightarrow}$ denotes convergence in probability.
In other words, instead of requiring that the distance $\left|X_n(\theta)-X(\theta)\right|$ converge in probability to $0$ for each $\theta\in\Theta$ (see Condition 3 above), we require convergence of$$\sup_{\theta\in\Theta}\left|X_n(\theta)-X(\theta)\right|,$$which is the maximum distance that can be found by ranging over the parameter space $\Theta$.
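The supremum can be approximated numerically on a grid of parameter values. In the following sketch (a hypothetical example, not taken from the text), $X_n(\theta)=\theta+\bar{Z}_n$ with noise that does not depend on $\theta$, so $\sup_{\theta}\left|X_n(\theta)-X(\theta)\right|=|\bar{Z}_n|$ and uniform convergence holds:

```python
import numpy as np

rng = np.random.default_rng(1)

# Grid approximating a parameter space Theta = [-5, 5].
thetas = np.linspace(-5.0, 5.0, 101)

def sup_distance(n, reps):
    """For X_n(theta) = theta + Zbar_n, draw `reps` realizations of the
    supremum over the grid of |X_n(theta) - X(theta)|."""
    zbar = rng.standard_normal((reps, n)).mean(axis=1)                   # one Zbar_n per replication
    dist = np.abs((thetas[None, :] + zbar[:, None]) - thetas[None, :])   # |X_n(theta) - theta|
    return dist.max(axis=1)  # noise does not depend on theta, so this equals |Zbar_n|

eps = 0.1
for n in (10, 100, 1000):
    # Monte Carlo estimate of P(sup over Theta of |X_n - X| > eps)
    p_hat = np.mean(sup_distance(n, 10_000) > eps)
    print(f"n={n}: estimated probability {p_hat:.3f}")
```

The estimated probability of a large supremum shrinks as $n$ grows, which is the uniform analogue of the pointwise check above.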
To better understand the differences with respect to pointwise convergence, we report a set of equivalent characterizations of uniform convergence.
Proposition The sequence $\{X_n(\theta)\}$ is uniformly convergent in probability to $X(\theta)$ if and only if one of the following equivalent conditions holds:
for any $\varepsilon>0$, it holds that$$\lim_{n\to\infty}\Pr\left(\sup_{\theta\in\Theta}\left|X_n(\theta)-X(\theta)\right|>\varepsilon\right)=0$$
for any $\varepsilon>0$, and for any $\delta>0$, there exists an integer $n_0$, depending on $\varepsilon$ and $\delta$, such that$$\Pr\left(\left|X_n(\theta)-X(\theta)\right|>\varepsilon\right)<\delta$$for all $\theta\in\Theta$ if $n\geq n_0$.
it holds that$$\sup_{\theta\in\Theta}\left|X_n(\theta)-X(\theta)\right|\ \overset{P}{\longrightarrow}\ 0$$
Condition 3 is just a repetition of the definition of uniform convergence in probability. We report it as Condition 3 in order to preserve the symmetry with the proposition that gives the equivalent conditions for pointwise convergence. Condition 3 states that the sequence of random variables$$\sup_{\theta\in\Theta}\left|X_n(\theta)-X(\theta)\right|$$converges in probability to $0$. Condition 1 restates the same thing by using the definition of convergence in probability. Condition 2 is obtained from Condition 1 by using the definition of limit of a sequence of real numbers. In fact,$$\Pr\left(\sup_{\theta\in\Theta}\left|X_n(\theta)-X(\theta)\right|>\varepsilon\right)$$is just a sequence of real numbers (indexed by $n$), and $0$ is its limit. By the very definition of limit, this means that, for any $\delta>0$, there exists an integer $n_0$ such that$$\Pr\left(\sup_{\theta\in\Theta}\left|X_n(\theta)-X(\theta)\right|>\varepsilon\right)<\delta$$if $n\geq n_0$. The integer $n_0$ depends not only on $\delta$, but also on $\varepsilon$, because each choice of $\varepsilon$ gives rise to a different sequence$$\Pr\left(\sup_{\theta\in\Theta}\left|X_n(\theta)-X(\theta)\right|>\varepsilon\right).$$Also note that, for any specific $\theta\in\Theta$, we have that$$\Pr\left(\left|X_n(\theta)-X(\theta)\right|>\varepsilon\right)\leq\Pr\left(\sup_{\theta\in\Theta}\left|X_n(\theta)-X(\theta)\right|>\varepsilon\right),$$because $\left|X_n(\theta)-X(\theta)\right|$ is always smaller than or equal to its supremum. Therefore, for any $\varepsilon>0$, and for any $\delta>0$, there exists an integer $n_0$, depending on $\varepsilon$ and $\delta$, such that$$\Pr\left(\left|X_n(\theta)-X(\theta)\right|>\varepsilon\right)<\delta$$for all $\theta\in\Theta$ if $n\geq n_0$.
You should compare these equivalent conditions with those for pointwise convergence. In particular, note that the integer $n_0$ such that$$\Pr\left(\left|X_n(\theta)-X(\theta)\right|>\varepsilon\right)<\delta \quad\text{if } n\geq n_0$$depends only on $\varepsilon$ and $\delta$ in the definition of uniform convergence, while it depends also on $\theta$ in the definition of pointwise convergence. From this fact, it follows that uniform convergence implies pointwise convergence, but the converse is not true (being able to find an $n_0$ that satisfies the condition for a given $\theta$ does not mean that we are also able to find an $n_0$ that satisfies the condition simultaneously for all possible $\theta$).
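The gap between the two notions can be seen in a classical (degenerate) example, not taken from the text: on $\Theta=[0,1)$, take $X_n(\theta)=\theta^n$ (a constant random variable for each $n$ and $\theta$). For each fixed $\theta<1$, $\theta^n\to 0$, so pointwise convergence in probability to $X(\theta)=0$ holds trivially; but for every $n$ there is a $\theta_n\in[0,1)$ with $\theta_n^n=1/2$, so $\sup_{\theta\in\Theta}\left|X_n(\theta)-X(\theta)\right|$ never falls below $1/2$ and uniform convergence fails:

```python
# Pointwise: at any fixed theta in [0, 1), theta**n -> 0 as n grows.
theta = 0.9
print([theta ** n for n in (10, 100, 1000)])  # shrinks toward 0

# Not uniform: for every n, theta_n = 0.5**(1/n) lies in [0, 1) and
# theta_n**n = 0.5, so the sup over Theta never falls below 0.5.
for n in (10, 100, 1000):
    theta_n = 0.5 ** (1.0 / n)
    print(n, theta_n, theta_n ** n)
```

The value of $n_0$ needed to make $\theta^n$ small grows without bound as $\theta$ approaches $1$, which is exactly the dependence on $\theta$ that uniform convergence rules out.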
Extending the concept to random vectors is straightforward.
We now suppose that $\{X_n(\omega,\theta)\}$ is a sequence of random vectors that depend both on the outcome $\omega$ of the probabilistic experiment and on the parameter $\theta$. The notation is the same as before.
Definition The sequence of random vectors $\{X_n(\theta)\}$ is uniformly convergent in probability to $X(\theta)$ if and only if$$\sup_{\theta\in\Theta}\left\|X_n(\theta)-X(\theta)\right\|\ \overset{P}{\longrightarrow}\ 0$$where $\overset{P}{\longrightarrow}$ denotes convergence in probability, and $\left\|X_n(\theta)-X(\theta)\right\|$ denotes the Euclidean norm of the vector $X_n(\theta)-X(\theta)$.
In other words, the Euclidean norm$$\left\|X_n(\theta)-X(\theta)\right\|$$is a random quantity that depends on the parameter $\theta$. By taking the supremum over $\Theta$, we obtain another random quantity, that is,$$\sup_{\theta\in\Theta}\left\|X_n(\theta)-X(\theta)\right\|,$$that does not depend on $\theta$. If this random quantity converges in probability to $0$, then we have uniform convergence of the sequence of random vectors.
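The vector case works the same way, with the Euclidean norm in place of the absolute value. A minimal sketch, again with a hypothetical sequence not taken from the text: $X_n(\theta)=\theta v+\bar{Z}_n$ for a fixed vector $v$ and 3-dimensional noise, so that $X_n(\theta)-X(\theta)=\bar{Z}_n$ for every $\theta$ and the supremum of the norm over the grid equals $\|\bar{Z}_n\|$:

```python
import numpy as np

rng = np.random.default_rng(2)

# Grid approximating a parameter space Theta = [-1, 1]; v is a fixed direction.
thetas = np.linspace(-1.0, 1.0, 51)
v = np.array([1.0, 2.0, 3.0])

def sup_norm_distance(n, reps):
    """For X_n(theta) = theta * v + Zbar_n (3-dimensional noise), draw `reps`
    realizations of the sup over the grid of ||X_n(theta) - theta * v||."""
    zbar = rng.standard_normal((reps, n, 3)).mean(axis=1)   # (reps, 3)
    # X_n(theta) - X(theta) = Zbar_n for every theta, so the Euclidean norm
    # does not depend on theta and the sup over the grid equals ||Zbar_n||.
    return np.linalg.norm(zbar, axis=1)

eps = 0.2
for n in (10, 100, 1000):
    # Monte Carlo estimate of P(sup over Theta of ||X_n - X|| > eps)
    p_hat = np.mean(sup_norm_distance(n, 10_000) > eps)
    print(f"n={n}: estimated probability {p_hat:.3f}")
```

As in the scalar case, the estimated probability of a large supremum shrinks as $n$ grows.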
The equivalent conditions for convergence are the same as those given for random variables (just replace absolute values with Euclidean norms).
Please cite as:
Taboga, Marco (2021). "Uniform convergence in probability", Lectures on probability theory and mathematical statistics. Kindle Direct Publishing. Online appendix. https://www.statlect.com/asymptotic-theory/uniform-convergence-in-probability.