In this lecture, we generalize the concepts introduced in the lecture entitled Sequences of random variables and their convergence. We no longer consider sequences whose elements are random variables; instead, we consider sequences whose generic element is a random vector. The generalization is straightforward, as the terminology and the basic concepts are almost the same as those used for sequences of random variables.

Let $\{x_n\}$ be a sequence of real vectors and $\{X_n\}$ a sequence of random vectors. If the real vector $x_n$ is a realization of the random vector $X_n$ for every $n$, then we say that the sequence of real vectors $\{x_n\}$ is a **realization of the sequence** of random vectors $\{X_n\}$, and we write $$\{x_n\} = \text{a realization of } \{X_n\}.$$

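As a concrete illustration (the choice of distribution, the dimension, and the truncation to five terms are ours, not part of the lecture), the following minimal Python sketch draws one realization of the first few terms of a sequence of 2-dimensional random vectors:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Each X_n is a 2-dimensional random vector; drawing the first 5 terms
# of the sequence {X_n} produces a realization {x_n} of the sequence.
x = [rng.standard_normal(2) for _ in range(5)]
for n, x_n in enumerate(x, start=1):
    print(f"x_{n} = {x_n}")
```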
Let $\Omega$ be a sample space. Let $\{X_n\}$ be a sequence of random vectors. We say that $\{X_n\}$ is a **sequence of random vectors defined on the sample space** $\Omega$ if all the random vectors $X_n$ belonging to the sequence $\{X_n\}$ are functions from $\Omega$ to $\mathbb{R}^K$ (where $K$ is the common dimension of the vectors).

Let $\{X_n\}$ be a sequence of random vectors defined on a sample space $\Omega$. We say that $\{X_n\}$ is an **independent sequence of random vectors** (or a sequence of independent random vectors) if every finite subset of $\{X_n\}$ (i.e., every finite subset of random vectors belonging to the sequence) is a set of mutually independent random vectors.
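Note that independence does not by itself constrain the distributions of the terms. As a hedged toy example of our own (not from the lecture), here is a sequence whose terms are mutually independent but not identically distributed, because the mean of $X_n$ shifts with $n$:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Independent but NOT identically distributed: the draws are mutually
# independent, but the mean of X_n shifts with the index n.
X = [rng.normal(loc=n, scale=1.0, size=2) for n in range(1, 6)]
for n, x_n in enumerate(X, start=1):
    print(f"X_{n} has mean ({n}, {n}); draw: {x_n}")
```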

Let $\{X_n\}$ be a sequence of random vectors. Denote by $F_{X_n}(x)$ the joint distribution function of a generic element $X_n$ of the sequence. We say that $\{X_n\}$ is a **sequence of identically distributed random vectors** if any two elements of the sequence have the same joint distribution function: $$F_{X_n}(x) = F_{X_m}(x) \quad \text{for any } n, m \text{ and for any } x.$$
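Conversely, identical distribution does not imply independence. A minimal sketch of our own making: a sequence whose terms all equal one and the same random vector $Z$ is identically distributed, but knowing one term reveals all the others.

```python
import numpy as np

rng = np.random.default_rng(seed=2)

# Identically distributed but NOT independent: every term equals the
# same random vector Z, so all terms share one joint distribution
# function, yet observing X_1 reveals X_2 (and every other term) exactly.
Z = rng.standard_normal(2)
X = [Z for _ in range(5)]
print(X[0], X[1])   # identical realizations in every draw of the sequence
```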

Let $\{X_n\}$ be a sequence of random vectors defined on a sample space $\Omega$. We say that $\{X_n\}$ is a **sequence of independent and identically distributed random vectors** (or an IID sequence of random vectors) if $\{X_n\}$ is both a sequence of independent random vectors and a sequence of identically distributed random vectors.
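A hedged sketch of an IID sequence, with a distribution and parameter values of our own choosing (a bivariate normal): successive draws are mutually independent, and each draw has the same joint distribution.

```python
import numpy as np

rng = np.random.default_rng(seed=3)

mu = np.array([0.0, 1.0])
Sigma = np.array([[1.0, 0.5],
                  [0.5, 2.0]])

# 1,000 terms of an IID sequence {X_n}: each row is one term of the
# sequence, all rows are mutually independent, and each row has the
# same joint distribution N(mu, Sigma).
X = rng.multivariate_normal(mu, Sigma, size=1000)

print("sample mean:", X.mean(axis=0))        # close to mu
print("sample covariance:\n", np.cov(X.T))   # close to Sigma
```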

Let $\{X_n\}$ be a sequence of random vectors defined on a sample space $\Omega$. Take a first group of $q$ successive terms of the sequence, $X_n, \ldots, X_{n+q-1}$. Now take a second group of $q$ successive terms of the sequence, $X_{n+k}, \ldots, X_{n+q-1+k}$. The second group is located $k$ positions after the first group. Denote the joint distribution function of the first group of terms by $$F_{X_n,\ldots,X_{n+q-1}}(x_1,\ldots,x_q)$$ and the joint distribution function of the second group of terms by $$F_{X_{n+k},\ldots,X_{n+q-1+k}}(x_1,\ldots,x_q).$$

The sequence $\{X_n\}$ is said to be **stationary** (or **strictly stationary**) if and only if $$F_{X_n,\ldots,X_{n+q-1}}(x_1,\ldots,x_q) = F_{X_{n+k},\ldots,X_{n+q-1+k}}(x_1,\ldots,x_q)$$ for any $n$, $q$, $k$ and for any vector $(x_1,\ldots,x_q)$.

In other words, a sequence is strictly stationary if and only if the two random vectors $(X_n,\ldots,X_{n+q-1})$ and $(X_{n+k},\ldots,X_{n+q-1+k})$ have the same joint distribution (for any $n$, $q$ and $k$). Requiring strict stationarity is weaker than requiring that a sequence be IID (see IID sequences above): if $\{X_n\}$ is an IID sequence, then it is also strictly stationary, while the converse is not necessarily true.
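To make the "weaker than IID" remark concrete, here is a sketch under assumptions of our own (a Gaussian VAR(1) model with hypothetical parameter values): started from its stationary distribution, the process is strictly stationary, yet consecutive terms are correlated, so the sequence is not IID.

```python
import numpy as np

rng = np.random.default_rng(seed=4)

# Gaussian VAR(1): X_n = A X_{n-1} + eps_n, with eps_n IID N(0, I).
# Started from its stationary distribution, the sequence is strictly
# stationary; because consecutive terms are correlated, it is not IID.
A = np.diag([0.8, 0.5])
stat_var = 1.0 / (1.0 - np.diag(A) ** 2)   # stationary variances

n_terms = 100_000
X = np.empty((n_terms, 2))
X[0] = rng.normal(0.0, np.sqrt(stat_var))
for n in range(1, n_terms):
    X[n] = A @ X[n - 1] + rng.standard_normal(2)

# Same distribution in the first and second half of the sample ...
print(X[:50_000].var(axis=0), X[50_000:].var(axis=0))
# ... but consecutive terms are correlated, so the sequence is not IID.
print(np.corrcoef(X[:-1, 0], X[1:, 0])[0, 1])   # close to 0.8
```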

Let $\{X_n\}$ be a sequence of random vectors defined on a sample space $\Omega$. We say that $\{X_n\}$ is a **covariance stationary sequence** (or **weakly stationary sequence**) if $$\mathrm{E}[X_n] = \mu \quad \text{for any } n \tag{1}$$ $$\mathrm{Cov}[X_n, X_{n-j}] = \Gamma_j \quad \text{for any } n \text{ and } j \tag{2}$$ where $\mu$ is a constant vector, $\Gamma_j$ is a constant matrix that depends only on $j$, and $n$ and $j$ are, of course, integers. Property (1) means that all the random vectors belonging to the sequence $\{X_n\}$ have the same mean $\mu$. Property (2) means that the cross-covariance $\mathrm{Cov}[X_n, X_{n-j}]$ between a term $X_n$ of the sequence and the term that is located $j$ positions before it ($X_{n-j}$) is always the same, irrespective of how $n$ has been chosen. In other words, $\mathrm{Cov}[X_n, X_{n-j}]$ depends only on $j$ and not on $n$. Note also that property (2) implies that all the random vectors in the sequence have the same covariance matrix (because $\mathrm{Var}[X_n] = \mathrm{Cov}[X_n, X_n] = \Gamma_0$): $$\mathrm{Var}[X_n] = \Gamma_0 \quad \text{for any } n.$$
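The following sketch (reusing the hypothetical VAR(1) from the previous example) estimates $\Gamma_1 = \mathrm{Cov}[X_n, X_{n-1}]$ on two disjoint stretches of a single path; the estimates agree because, by property (2), the cross-covariance does not depend on $n$.

```python
import numpy as np

rng = np.random.default_rng(seed=5)

# The bivariate Gaussian VAR(1) from the previous sketch is also
# covariance stationary once started from its stationary distribution.
A = np.diag([0.8, 0.5])
n_terms = 200_000
X = np.empty((n_terms, 2))
X[0] = rng.normal(0.0, np.sqrt(1.0 / (1.0 - np.diag(A) ** 2)))
for n in range(1, n_terms):
    X[n] = A @ X[n - 1] + rng.standard_normal(2)

def gamma_hat(X, j):
    """Estimate Gamma_j = Cov[X_n, X_{n-j}] from a sample path."""
    Xc = X - X.mean(axis=0)
    return Xc[j:].T @ Xc[:-j] / (len(X) - j)

# Gamma_1 estimated on the first and on the second half of the path:
# the two estimates agree because Cov[X_n, X_{n-j}] does not depend on n.
print(gamma_hat(X[:100_000], 1))
print(gamma_hat(X[100_000:], 1))
```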

The definition of a mixing sequence of random vectors is a straightforward generalization of the definition of a mixing sequence of random variables, which was discussed in the lecture entitled Sequences of random variables and their convergence. Therefore, we report here the definition of a mixing sequence of random vectors without further comment, and we refer the reader to the aforementioned lecture for an explanation of the concept.

Definition
We say that a sequence of random vectors $\{X_n\}$ is **mixing** (or **strongly mixing**) if and only if $$\lim_{n\to\infty} \mathrm{Cov}\left[f(X_t,\ldots,X_{t+q}),\, g(X_{t+n},\ldots,X_{t+n+q})\right] = 0$$ for any two (measurable and bounded) functions $f$ and $g$ and for any $t$ and $q$.
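As a hedged numerical illustration (the AR(1) model, the lags, and the functions $f$ and $g$ are our own choices), the covariance between a function of $X_t$ and a function of $X_{t+n}$ shrinks towards zero as the lag $n$ grows:

```python
import numpy as np

rng = np.random.default_rng(seed=7)

# Many independent replications of a stationary AR(1) path, so that
# covariances at fixed time points can be estimated across replications.
rho, n_reps, n_terms = 0.8, 50_000, 50
X = np.empty((n_reps, n_terms))
X[:, 0] = rng.normal(0.0, np.sqrt(1.0 / (1.0 - rho**2)), size=n_reps)
for t in range(1, n_terms):
    X[:, t] = rho * X[:, t - 1] + rng.standard_normal(n_reps)

# Cov[f(X_0), g(X_n)] shrinks towards zero as the lag n grows,
# which is the defining property of a mixing sequence.
f, g = np.tanh, np.sin
for n in (1, 5, 10, 20, 40):
    c = np.cov(f(X[:, 0]), g(X[:, n]))[0, 1]
    print(f"lag {n:2d}: estimated cov = {c:+.4f}")
```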

As in the previous section, we report here a definition of an ergodic sequence of random vectors, which is a straightforward generalization of the definition of an ergodic sequence of random variables, and we refer the reader to the lecture entitled Sequences of random variables and their convergence for an explanation of the concept of ergodicity.

Denote by $S$ the set of all possible sequences of real vectors. When $s$ is a sequence of real vectors, denote by $s_{-1}$ the subsequence obtained by dropping the first term of $s$, that is, $$s = \{s_1, s_2, s_3, \ldots\} \implies s_{-1} = \{s_2, s_3, s_4, \ldots\}.$$

We say that a subset $A \subseteq S$ is a **shift invariant** set if and only if $s_{-1}$ belongs to $A$ whenever $s$ belongs to $A$.

Definition
A set $A \subseteq S$ is shift invariant if and only if $$s \in A \implies s_{-1} \in A.$$

Shift invariance is used to define ergodicity.

Definition
A sequence of random vectors $\{X_n\}$ is said to be an **ergodic sequence** if and only if $$\mathrm{P}\left(\{X_n\} \in A\right) = 0 \text{ or } \mathrm{P}\left(\{X_n\} \in A\right) = 1$$ whenever $A$ is a shift invariant set.
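A practical consequence of ergodicity (together with stationarity) is that time averages along a single realization converge to the common mean. A minimal sketch, under our own modeling assumptions (a stationary AR(1) with $|\rho| < 1$, which is ergodic):

```python
import numpy as np

rng = np.random.default_rng(seed=8)

# A stationary AR(1) with |rho| < 1 is ergodic: the time average along
# a single realization converges to the common mean E[X_n] = mu.
rho, mu, n_terms = 0.8, 2.0, 500_000
X = np.empty(n_terms)
X[0] = mu + rng.normal(0.0, np.sqrt(1.0 / (1.0 - rho**2)))
for n in range(1, n_terms):
    X[n] = mu + rho * (X[n - 1] - mu) + rng.standard_normal()

# The sample mean over one path approaches mu as the sample grows.
for m in (100, 10_000, 500_000):
    print(f"mean of the first {m:>7,} terms: {X[:m].mean():.4f}")
```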

Similarly to what happens for sequences of random variables, there are several different notions of convergence for sequences of random vectors. In particular, all the modes of convergence defined for random variables can be generalized to random vectors:

- convergence in probability;
- almost sure convergence;
- mean-square convergence;
- convergence in distribution.
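As a toy illustration of the first of these modes (the sequence $X_n = Z + U_n/n$ and all parameter values are our own construction, not from the lecture), the probability that $X_n$ deviates from its limit $Z$ by more than $\varepsilon$ vanishes as $n$ grows:

```python
import numpy as np

rng = np.random.default_rng(seed=9)

# Toy sequence X_n = Z + U_n / n, with Z a fixed random vector and the
# U_n IID standard normal vectors: X_n converges in probability to Z,
# i.e. P(||X_n - Z|| > eps) -> 0 for every eps > 0.
Z = rng.standard_normal(2)
eps = 0.1
for n in (1, 10, 100, 1000):
    U = rng.standard_normal((100_000, 2))   # many draws of U_n
    X_n = Z + U / n
    prob = (np.linalg.norm(X_n - Z, axis=1) > eps).mean()
    print(f"n = {n:4d}: P(||X_n - Z|| > {eps}) = {prob:.4f}")
```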

