The concept of a random vector is a multidimensional generalization of the concept of a random variable.
Suppose that we conduct a probabilistic experiment and that the possible outcomes of the experiment are described by a sample space $\Omega$. A random vector is a vector whose value depends on the outcome of the experiment, as stated by the following definition.
Definition
Let $\Omega$ be a sample space. A random vector $X$ is a function from the sample space $\Omega$ to the set of $K$-dimensional real vectors $\mathbb{R}^K$:
$$X : \Omega \rightarrow \mathbb{R}^K.$$
In rigorous probability theory, the function $X$ is also required to be measurable (a concept found in measure theory; see the more rigorous definition of random vector below).
The real vector $X(\omega)$ associated to a sample point $\omega \in \Omega$ is called a realization of the random vector. The set of all possible realizations is called the support and is denoted by $R_X$.
Denote by $P(E)$ the probability of an event $E \subseteq \Omega$.
When dealing with random vectors, the following conventions are used:
If $A \subseteq \mathbb{R}^K$, we often write $P(X \in A)$ with the meaning
$$P(X \in A) = P(\{\omega \in \Omega : X(\omega) \in A\}).$$
If $x \in \mathbb{R}^K$, we sometimes use the notation $P(X = x)$ with the meaning
$$P(X = x) = P(\{\omega \in \Omega : X(\omega) = x\}).$$
In applied work, it is very common to build statistical models in which a random vector $X$ is defined by directly specifying its joint distribution (in which case the specification of the sample space $\Omega$ is omitted altogether). We often write $X$ instead of $X(\omega)$, that is, we omit the dependence on the sample point $\omega$.
The following example shows how a random vector can be defined on a sample space.
Example
Two coins are tossed. The possible outcomes of each toss are either tail ($T$) or head ($H$). The sample space is
$$\Omega = \{TT, TH, HT, HH\}.$$
The four possible outcomes are assigned equal probabilities:
$$P(\{TT\}) = P(\{TH\}) = P(\{HT\}) = P(\{HH\}) = \frac{1}{4}.$$
If tail ($T$) is the outcome of a toss, we win one dollar; if head ($H$) is the outcome, we lose one dollar. A 2-dimensional random vector $X$ indicates the amount we win (or lose) on each toss:
$$X(TT) = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \quad X(TH) = \begin{bmatrix} 1 \\ -1 \end{bmatrix}, \quad X(HT) = \begin{bmatrix} -1 \\ 1 \end{bmatrix}, \quad X(HH) = \begin{bmatrix} -1 \\ -1 \end{bmatrix}.$$
The probability of winning one dollar on both tosses is
$$P\left(X = \begin{bmatrix} 1 \\ 1 \end{bmatrix}\right) = P(\{TT\}) = \frac{1}{4}.$$
The probability of losing one dollar on the second toss is
$$P\left(X \in \left\{ \begin{bmatrix} 1 \\ -1 \end{bmatrix}, \begin{bmatrix} -1 \\ -1 \end{bmatrix} \right\}\right) = P(\{TH, HH\}) = \frac{1}{2}.$$
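As a quick sanity check, the example can be simulated. The sketch below (plain Python, no external libraries) draws many repetitions of the two-toss experiment and estimates the two probabilities computed above; the names `draw_realization` and `n_draws` are illustrative, not part of the original text.

```python
import random

def draw_realization(rng):
    """Draw one realization of X: +1 dollar for tail (T), -1 for head (H)."""
    return tuple(1 if rng.random() < 0.5 else -1 for _ in range(2))

rng = random.Random(0)
n_draws = 100_000
draws = [draw_realization(rng) for _ in range(n_draws)]

# P(X = [1, 1]): winning one dollar on both tosses (exact value: 1/4).
p_win_both = sum(d == (1, 1) for d in draws) / n_draws

# P(X in {[1, -1], [-1, -1]}): losing one dollar on the second toss (exact value: 1/2).
p_lose_second = sum(d[1] == -1 for d in draws) / n_draws

print(p_win_both, p_lose_second)  # approximately 0.25 and 0.5
```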
This section and the next one deal with discrete and continuous vectors, two kinds of random vectors that have special properties and are often found in applications.
Discrete vectors are defined as follows.
Definition
A random vector $X$ is discrete if and only if:
its support $R_X$ is a countable set;
there is a function $p_X : \mathbb{R}^K \rightarrow [0, 1]$, called the joint probability mass function (or joint pmf, or joint probability function) of $X$, such that, for any $x \in \mathbb{R}^K$:
$$p_X(x) = \begin{cases} P(X = x) & \text{if } x \in R_X \\ 0 & \text{if } x \notin R_X. \end{cases}$$
The following notations are used interchangeably to indicate the joint probability mass function:
$$p_X(x), \qquad p_X(x_1, \ldots, x_K), \qquad p_{X_1, \ldots, X_K}(x_1, \ldots, x_K).$$
In the second and third notations, the components of $x$ and $X$ are explicitly indicated.
Example
Suppose $X$ is a $2$-dimensional random vector whose components ($X_1$ and $X_2$) can take only two values: $0$ or $1$. Furthermore, the four possible combinations of $X_1$ and $X_2$ are all equally likely. $X$ is an example of a discrete vector. Its support is
$$R_X = \left\{ \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 1 \end{bmatrix} \right\}.$$
Its probability mass function is
$$p_X(x) = \begin{cases} \dfrac{1}{4} & \text{if } x \in R_X \\ 0 & \text{otherwise.} \end{cases}$$
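A discrete joint pmf with finite support can be represented directly as a lookup table. The following sketch encodes the pmf of this example as a Python dictionary; the helper name `p_X` mirrors the notation above but is otherwise ours.

```python
from fractions import Fraction

# Support of X: the four equally likely combinations of X1 and X2.
support = [(0, 0), (0, 1), (1, 0), (1, 1)]

# Joint pmf p_X as a lookup table; points outside the support get probability 0.
joint_pmf = {x: Fraction(1, 4) for x in support}

def p_X(x):
    return joint_pmf.get(x, Fraction(0))

assert sum(joint_pmf.values()) == 1  # a valid pmf sums to 1 over the support
print(p_X((1, 0)), p_X((2, 2)))      # 1/4 and 0
```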
Continuous vectors are defined as follows.
Definition
A random vector $X$ is continuous (or absolutely continuous) if and only if:
its support $R_X$ is not countable;
there is a function $f_X : \mathbb{R}^K \rightarrow [0, \infty)$, called the joint probability density function (or joint pdf, or joint density) of $X$, such that, for any set $A \subseteq \mathbb{R}^K$ for which the integral is well defined,
$$P(X \in A) = \int_A f_X(x_1, \ldots, x_K) \, dx_1 \ldots dx_K.$$
The following notations are used interchangeably to indicate the joint probability density function:
$$f_X(x), \qquad f_X(x_1, \ldots, x_K), \qquad f_{X_1, \ldots, X_K}(x_1, \ldots, x_K).$$
In the second and third notations, the components of the random vector are explicitly indicated.
Example
Suppose $X$ is a $2$-dimensional random vector whose components ($X_1$ and $X_2$) are independent uniform random variables (on the interval $[0, 1]$). Then $X$ is an example of a continuous vector. Its support is
$$R_X = [0, 1] \times [0, 1].$$
Its joint probability density function is
$$f_X(x) = \begin{cases} 1 & \text{if } x \in R_X \\ 0 & \text{otherwise.} \end{cases}$$
The probability that the realization of $X$ falls in the rectangle $[0, 1/2] \times [0, 1/2]$ is
$$P\left(X \in [0, 1/2] \times [0, 1/2]\right) = \int_0^{1/2} \int_0^{1/2} 1 \, dx_1 \, dx_2 = \frac{1}{4}.$$
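The rectangle probability can be verified numerically by integrating the joint pdf over the rectangle. A minimal sketch using `scipy.integrate.dblquad` (SciPy assumed available):

```python
from scipy.integrate import dblquad

# Joint pdf of two independent uniforms on [0, 1].
def f_X(x2, x1):  # dblquad integrates over its first argument innermost
    return 1.0 if 0 <= x1 <= 1 and 0 <= x2 <= 1 else 0.0

# P(X in [0, 1/2] x [0, 1/2]) = integral of f_X over the rectangle.
prob, _err = dblquad(f_X, 0, 0.5, lambda x1: 0, lambda x1: 0.5)
print(prob)  # 0.25
```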
Random vectors, including those that are neither discrete nor continuous, are often described using their joint distribution function.
Definition
Let $X$ be a random vector. The joint distribution function (or joint df, or joint cumulative distribution function, or joint cdf) of $X$ is a function $F_X : \mathbb{R}^K \rightarrow [0, 1]$ such that
$$F_X(x) = P(X_1 \leq x_1, \ldots, X_K \leq x_K),$$
where the components of $X$ and $x$ are denoted by $X_k$ and $x_k$ respectively, for $k = 1, \ldots, K$.
The following notations are used interchangeably to indicate the joint distribution function:
$$F_X(x), \qquad F_X(x_1, \ldots, x_K), \qquad F_{X_1, \ldots, X_K}(x_1, \ldots, x_K).$$
In the second and third notations, the components of the random vector are explicitly indicated.
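The joint cdf evaluates the probability of the "south-west" orthant at a point. The sketch below estimates $F_X$ from simulated draws of the uniform example above; `empirical_cdf` is an illustrative name, not part of the original text.

```python
import random

rng = random.Random(0)
draws = [(rng.random(), rng.random()) for _ in range(100_000)]  # uniform on [0,1]^2

def empirical_cdf(x1, x2):
    """Estimate F_X(x1, x2) = P(X1 <= x1, X2 <= x2) as the fraction of draws in the orthant."""
    return sum(d1 <= x1 and d2 <= x2 for d1, d2 in draws) / len(draws)

# For independent uniforms, F_X(x1, x2) = x1 * x2 on [0,1]^2.
print(empirical_cdf(0.5, 0.5))  # approximately 0.25
```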
Sometimes, we talk about the joint distribution of a random vector, without specifying whether we are referring to
the joint distribution function;
the joint pmf (in the case of discrete random vectors);
the joint pdf (in the case of continuous random vectors).
This ambiguity is legitimate, since
the joint pmf completely determines (and is completely determined by) the joint distribution function of a discrete vector;
the joint pdf completely determines (and is completely determined by) the joint distribution function of a continuous vector.
In the remainder of this lecture, we use the term joint distribution when we are making statements that apply both to the distribution function and to the probability mass (or density) function of a random vector.
The following subsections contain more details about random vectors.
A random matrix is a matrix whose entries are random variables.
It is not necessary to develop a separate theory for random matrices because a random matrix can always be written as a random vector.
Given a $K \times L$ random matrix $X$, its vectorization, denoted by $\operatorname{vec}(X)$, is the $KL \times 1$ random vector obtained by stacking the columns of $X$ on top of each other.
Example
Let $X$ be the following $2 \times 2$ random matrix:
$$X = \begin{bmatrix} X_{11} & X_{12} \\ X_{21} & X_{22} \end{bmatrix}.$$
The vectorization of $X$ is the following $4 \times 1$ random vector:
$$\operatorname{vec}(X) = \begin{bmatrix} X_{11} \\ X_{21} \\ X_{12} \\ X_{22} \end{bmatrix}.$$
When $\operatorname{vec}(X)$ is a discrete vector, we say that $X$ is a discrete random matrix, and the joint pmf of $X$ is just the joint pmf of $\operatorname{vec}(X)$. By the same token, when $\operatorname{vec}(X)$ is a continuous vector, we say that $X$ is a continuous random matrix, and the joint pdf of $X$ is just the joint pdf of $\operatorname{vec}(X)$.
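Vectorization is column-major stacking, which in NumPy corresponds to flattening in Fortran order. A minimal sketch (NumPy assumed available):

```python
import numpy as np

# A 2 x 2 realization of the random matrix X (entries chosen to show their positions).
X = np.array([[11, 12],
              [21, 22]])

# vec(X): stack the columns on top of each other (column-major / Fortran order).
vec_X = X.flatten(order="F")
print(vec_X)  # [11 21 12 22]
```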
Let $X_i$ be the $i$-th component of a $K$-dimensional random vector $X$. The distribution function $F_{X_i}(x_i)$ of $X_i$ is called the marginal distribution function of $X_i$.
If $X$ is discrete, then $X_i$ is a discrete random variable and its probability mass function $p_{X_i}(x_i)$ is called the marginal probability mass function of $X_i$.
If $X$ is continuous, then $X_i$ is a continuous random variable and its probability density function $f_{X_i}(x_i)$ is called the marginal probability density function of $X_i$.
The process of deriving the distribution of a component $X_i$ of a random vector $X$ from the joint distribution of $X$ is known as marginalization. Marginalization can also have a broader meaning: it can refer to the act of deriving the joint distribution of a subset of the components of $X$ from the joint distribution of $X$.
For example, if $X$ is a random vector having three components ($X_1$, $X_2$ and $X_3$), we can marginalize the joint distribution of $X_1$, $X_2$ and $X_3$ to find the joint distribution of $X_1$ and $X_2$ (in this case we say that $X_3$ is marginalized out of the joint distribution of $X_1$, $X_2$ and $X_3$).
Let $X_i$ be the $i$-th component of a $K$-dimensional discrete random vector $X$. The marginal probability mass function of $X_i$ can be derived from the joint probability mass function of $X$ as follows:
$$p_{X_i}(x_i) = \sum_{(t_1, \ldots, t_K) \in A(x_i)} p_X(t_1, \ldots, t_K),$$
where the sum is over the set
$$A(x_i) = \{(t_1, \ldots, t_K) \in R_X : t_i = x_i\}.$$
In other words, the probability that $X_i = x_i$ is obtained as the sum of the probabilities of all the vectors in $R_X$ whose $i$-th component is equal to $x_i$.
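With the pmf stored as a lookup table, the marginal pmf of one component is exactly this sum over all support points whose $i$-th coordinate equals the given value. A sketch using the dictionary representation from the earlier example; `marginal_pmf` is an illustrative name.

```python
from collections import defaultdict
from fractions import Fraction

# Joint pmf of the earlier 2-dimensional example, as a lookup table.
joint_pmf = {(0, 0): Fraction(1, 4), (0, 1): Fraction(1, 4),
             (1, 0): Fraction(1, 4), (1, 1): Fraction(1, 4)}

def marginal_pmf(joint, i):
    """Marginal pmf of the i-th component: for each value, sum the joint pmf
    over all support points whose i-th coordinate equals that value."""
    marginal = defaultdict(Fraction)
    for point, prob in joint.items():
        marginal[point[i]] += prob
    return dict(marginal)

print(marginal_pmf(joint_pmf, 0))  # {0: Fraction(1, 2), 1: Fraction(1, 2)}
```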
Let $X_i$ be the $i$-th component of a discrete random vector $X$. By marginalizing $X_i$ out of the joint distribution of $X$, we obtain the joint distribution of the remaining components of $X$, that is, we obtain the joint distribution of the random vector $X_{-i}$ defined as follows:
$$X_{-i} = \begin{bmatrix} X_1 & \ldots & X_{i-1} & X_{i+1} & \ldots & X_K \end{bmatrix}^\top.$$
The joint probability mass function of $X_{-i}$ is computed as follows:
$$p_{X_{-i}}(x_{-i}) = \sum_{x_i \in R_{X_i}} p_X(x_1, \ldots, x_K),$$
where the sum is over the set $R_{X_i}$, the support of $X_i$. In other words, the joint probability mass function of $X_{-i}$ can be computed by summing the joint probability mass function of $X$ over all values of $x_i$ that belong to the support of $X_i$.
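Marginalizing $X_i$ out is the complementary operation: group the support points by their remaining coordinates and sum over the values of $x_i$. A brief sketch reusing the same dictionary representation; `marginalize_out` is an illustrative name.

```python
from collections import defaultdict
from fractions import Fraction

joint_pmf = {(0, 0): Fraction(1, 4), (0, 1): Fraction(1, 4),
             (1, 0): Fraction(1, 4), (1, 1): Fraction(1, 4)}

def marginalize_out(joint, i):
    """Joint pmf of X_{-i}: sum the joint pmf over all values of the i-th coordinate."""
    reduced = defaultdict(Fraction)
    for point, prob in joint.items():
        rest = point[:i] + point[i + 1:]  # drop the i-th coordinate
        reduced[rest] += prob
    return dict(reduced)

print(marginalize_out(joint_pmf, 1))  # {(0,): Fraction(1, 2), (1,): Fraction(1, 2)}
```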
Let $X_i$ be the $i$-th component of a $K$-dimensional continuous random vector $X$. The marginal probability density function of $X_i$ can be derived from the joint probability density function of $X$ as follows:
$$f_{X_i}(x_i) = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} f_X(x_1, \ldots, x_K) \, dx_1 \cdots dx_{i-1} \, dx_{i+1} \cdots dx_K.$$
In other words, the joint probability density function, evaluated at $(x_1, \ldots, x_K)$, is integrated with respect to all variables except $x_i$ (so it is integrated a total of $K - 1$ times).
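Numerically, the marginal pdf is obtained by integrating out the other variables. The sketch below recovers the marginal of $X_1$ for the independent-uniform example by integrating the joint pdf over $x_2$ with `scipy.integrate.quad` (SciPy assumed available):

```python
from scipy.integrate import quad

def f_X(x1, x2):
    """Joint pdf of two independent uniforms on [0, 1]."""
    return 1.0 if 0 <= x1 <= 1 and 0 <= x2 <= 1 else 0.0

def marginal_f_X1(x1):
    """f_{X1}(x1): integrate the joint pdf with respect to x2."""
    value, _err = quad(lambda x2: f_X(x1, x2), 0, 1)
    return value

print(marginal_f_X1(0.3), marginal_f_X1(1.7))  # 1.0 and 0.0
```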
Let $X_i$ be the $i$-th component of a continuous random vector $X$. By marginalizing $X_i$ out of the joint distribution of $X$, we obtain the joint distribution of the remaining components of $X$, that is, we get the joint distribution of the random vector $X_{-i}$ defined as follows:
$$X_{-i} = \begin{bmatrix} X_1 & \ldots & X_{i-1} & X_{i+1} & \ldots & X_K \end{bmatrix}^\top.$$
The joint probability density function of $X_{-i}$ is computed as follows:
$$f_{X_{-i}}(x_{-i}) = \int_{-\infty}^{\infty} f_X(x_1, \ldots, x_K) \, dx_i.$$
In other words, the joint probability density function of $X_{-i}$ can be computed by integrating the joint probability density function of $X$ with respect to $x_i$.
Note that, if $X$ is continuous, then
$$F_X(x_1, \ldots, x_K) = \int_{-\infty}^{x_1} \cdots \int_{-\infty}^{x_K} f_X(t_1, \ldots, t_K) \, dt_K \cdots dt_1.$$
Hence, by taking the $K$-th order cross-partial derivative with respect to $x_1, \ldots, x_K$ of both sides of the above equation, we obtain
$$f_X(x_1, \ldots, x_K) = \frac{\partial^K F_X(x_1, \ldots, x_K)}{\partial x_1 \cdots \partial x_K}.$$
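The relationship between the joint cdf and the joint pdf can be checked symbolically. A minimal SymPy sketch for the independent-uniform example, restricted to the interior of the support, where $F_X(x_1, x_2) = x_1 x_2$:

```python
import sympy as sp

x1, x2 = sp.symbols("x1 x2", positive=True)

# Joint cdf of two independent uniforms on [0, 1], valid for 0 <= x1, x2 <= 1.
F = x1 * x2

# The joint pdf is the second-order cross-partial derivative of the cdf.
f = sp.diff(F, x1, x2)
print(f)  # 1, the uniform joint density on the unit square
```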
We report here a more rigorous definition of random vector by using the formalism of measure theory. This definition is analogous to the measure-theoretic definition given in the lecture on random variables, to which you should refer for a more detailed explanation.
Definition
Let $(\Omega, \mathcal{F}, P)$ be a probability space. Let $\mathcal{B}(\mathbb{R}^K)$ be the Borel sigma-algebra of $\mathbb{R}^K$ (i.e., the smallest sigma-algebra containing all open hyper-rectangles in $\mathbb{R}^K$). A function $X : \Omega \rightarrow \mathbb{R}^K$ such that
$$X^{-1}(B) \in \mathcal{F}$$
for any $B \in \mathcal{B}(\mathbb{R}^K)$ is said to be a random vector on $\Omega$.
This definition ensures that the probability that the realization $X(\omega)$ of the random vector $X$ will belong to a set $B \in \mathcal{B}(\mathbb{R}^K)$ can be defined as
$$P(X \in B) = P(X^{-1}(B)) = P(\{\omega \in \Omega : X(\omega) \in B\}),$$
because the set $X^{-1}(B)$ belongs to the sigma-algebra $\mathcal{F}$ and, as a consequence, its probability is well-defined.
Some solved exercises on random vectors can be found below.
Exercise 1
Let $X$ be a $2 \times 1$ discrete random vector and denote its components by $X_1$ and $X_2$. Let the support of $X$ be the set of all $2 \times 1$ vectors such that their entries belong to the set of the first three natural numbers, that is,
$$R_X = A \times A,$$
where
$$A = \{1, 2, 3\}.$$
Let the joint probability mass function of $X$ be
$$p_X(x_1, x_2) = \begin{cases} \dfrac{x_1 x_2}{36} & \text{if } (x_1, x_2) \in R_X \\ 0 & \text{otherwise.} \end{cases}$$
Find $P(X_1 = 2, X_2 = 3)$.
Solution
Trivially, we need to evaluate the joint probability mass function at the point $(2, 3)$, that is,
$$P(X_1 = 2, X_2 = 3) = p_X(2, 3) = \frac{2 \cdot 3}{36} = \frac{1}{6}.$$
Exercise 2
Let $X$ be a $2 \times 1$ discrete random vector and denote its components by $X_1$ and $X_2$. Let the support of $X$ be the set of all $2 \times 1$ vectors such that their entries belong to the set of the first three natural numbers, that is,
$$R_X = A \times A,$$
where
$$A = \{1, 2, 3\}.$$
Let the joint probability mass function of $X$ be
$$p_X(x_1, x_2) = \begin{cases} \dfrac{x_1 x_2}{36} & \text{if } (x_1, x_2) \in R_X \\ 0 & \text{otherwise.} \end{cases}$$
Find $P(X_1 + X_2 = 5)$.
Solution
There are only two possible cases that give rise to the occurrence $X_1 + X_2 = 5$. These cases are
$$X_1 = 2, \ X_2 = 3$$
and
$$X_1 = 3, \ X_2 = 2.$$
Therefore, since these two cases are disjoint events, we can use the additivity of probability:
$$P(X_1 + X_2 = 5) = p_X(2, 3) + p_X(3, 2) = \frac{6}{36} + \frac{6}{36} = \frac{1}{3}.$$
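Both results can be checked by brute-force enumeration of the nine support points; a short sketch (the answer to Exercise 1 is the single term with $x_1 = 2$, $x_2 = 3$):

```python
from fractions import Fraction

def p_X(x1, x2):
    """Joint pmf of the exercise: x1*x2/36 on {1,2,3} x {1,2,3}."""
    return Fraction(x1 * x2, 36) if x1 in (1, 2, 3) and x2 in (1, 2, 3) else Fraction(0)

# Exercise 1: P(X1 = 2, X2 = 3).
print(p_X(2, 3))  # 1/6

# Exercise 2: P(X1 + X2 = 5), summing the pmf over all support points in the event.
print(sum(p_X(a, b) for a in (1, 2, 3) for b in (1, 2, 3) if a + b == 5))  # 1/3
```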
Exercise 3
Let $X$ be a $2 \times 1$ discrete random vector and denote its components by $X_1$ and $X_2$. Let the support of $X$ be
$$R_X = \left\{ \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \begin{bmatrix} 2 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \end{bmatrix} \right\}$$
and its joint probability mass function be
$$p_X(x) = \begin{cases} \dfrac{1}{3} & \text{if } x \in R_X \\ 0 & \text{otherwise.} \end{cases}$$
Derive the marginal probability mass functions of $X_1$ and $X_2$.
Solution
The support of $X_1$ is
$$R_{X_1} = \{0, 1, 2\}.$$
We need to compute the probability of each element of the support of $X_1$:
$$p_{X_1}(0) = p_X\!\left(\begin{bmatrix} 0 \\ 0 \end{bmatrix}\right) = \frac{1}{3}, \qquad p_{X_1}(1) = p_X\!\left(\begin{bmatrix} 1 \\ 1 \end{bmatrix}\right) = \frac{1}{3}, \qquad p_{X_1}(2) = p_X\!\left(\begin{bmatrix} 2 \\ 0 \end{bmatrix}\right) = \frac{1}{3}.$$
Thus, the probability mass function of $X_1$ is
$$p_{X_1}(x_1) = \begin{cases} \dfrac{1}{3} & \text{if } x_1 \in \{0, 1, 2\} \\ 0 & \text{otherwise.} \end{cases}$$
The support of $X_2$ is
$$R_{X_2} = \{0, 1\}.$$
We need to compute the probability of each element of the support of $X_2$:
$$p_{X_2}(0) = p_X\!\left(\begin{bmatrix} 2 \\ 0 \end{bmatrix}\right) + p_X\!\left(\begin{bmatrix} 0 \\ 0 \end{bmatrix}\right) = \frac{2}{3}, \qquad p_{X_2}(1) = p_X\!\left(\begin{bmatrix} 1 \\ 1 \end{bmatrix}\right) = \frac{1}{3}.$$
Thus, the probability mass function of $X_2$ is
$$p_{X_2}(x_2) = \begin{cases} \dfrac{2}{3} & \text{if } x_2 = 0 \\ \dfrac{1}{3} & \text{if } x_2 = 1 \\ 0 & \text{otherwise.} \end{cases}$$
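The same marginalization routine sketched earlier verifies these marginals; a compact check:

```python
from collections import defaultdict
from fractions import Fraction

# Joint pmf of the exercise: three equally likely support points.
joint_pmf = {(1, 1): Fraction(1, 3), (2, 0): Fraction(1, 3), (0, 0): Fraction(1, 3)}

for i, name in [(0, "X1"), (1, "X2")]:
    marginal = defaultdict(Fraction)
    for point, prob in joint_pmf.items():
        marginal[point[i]] += prob  # sum over points whose i-th coordinate matches
    print(name, dict(marginal))
# X1 {1: 1/3, 2: 1/3, 0: 1/3}
# X2 {1: 1/3, 0: 2/3}
```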
Exercise 4
Let $X$ be a $2 \times 1$ continuous random vector and denote its components by $X_1$ and $X_2$. Let the support of $X$ be
$$R_X = [0, 1] \times [0, 1],$$
that is, the set of all $2 \times 1$ vectors such that the first component belongs to the interval $[0, 1]$ and the second component belongs to the interval $[0, 1]$. Let the joint probability density function of $X$ be
$$f_X(x_1, x_2) = \begin{cases} x_1 + x_2 & \text{if } (x_1, x_2) \in R_X \\ 0 & \text{otherwise.} \end{cases}$$
Compute $P(X_1 \leq 1/2, X_2 \leq 1/2)$.
Solution
By the very definition of joint probability density function:
$$P(X_1 \leq 1/2, X_2 \leq 1/2) = \int_0^{1/2} \int_0^{1/2} (x_1 + x_2) \, dx_1 \, dx_2 = \int_0^{1/2} \left( \frac{1}{8} + \frac{x_2}{2} \right) dx_2 = \frac{1}{16} + \frac{1}{16} = \frac{1}{8}.$$
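A numerical check of the double integral with `scipy.integrate.dblquad` (SciPy assumed available):

```python
from scipy.integrate import dblquad

# Joint pdf of the exercise: x1 + x2 on the unit square.
def f_X(x2, x1):  # dblquad expects the inner variable first
    return x1 + x2

# P(X1 <= 1/2, X2 <= 1/2): integrate the pdf over [0, 1/2] x [0, 1/2].
prob, _err = dblquad(f_X, 0, 0.5, lambda x1: 0, lambda x1: 0.5)
print(prob)  # 0.125 = 1/8
```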
Exercise 5
Let $X$ be a $2 \times 1$ continuous random vector and denote its components by $X_1$ and $X_2$. Let the support of $X$ be
$$R_X = [0, 1] \times [0, 2],$$
that is, the set of all $2 \times 1$ vectors such that the first component belongs to the interval $[0, 1]$ and the second component belongs to the interval $[0, 2]$. Let the joint probability density function of $X$ be
$$f_X(x_1, x_2) = \begin{cases} x_1 x_2 & \text{if } (x_1, x_2) \in R_X \\ 0 & \text{otherwise.} \end{cases}$$
Compute $P(X_2 - X_1 \geq 1)$.
Solution
First of all, note that $X_2 - X_1 \geq 1$ if and only if $X_2 \geq 1 + X_1$. By using the definition of joint probability density function, we obtain
$$P(X_2 - X_1 \geq 1) = \int_0^1 \left( \int_{1 + x_1}^{2} x_1 x_2 \, dx_2 \right) dx_1.$$
Now, note that, when $0 \leq x_1 \leq 1$, the inner integral is
$$\int_{1 + x_1}^{2} x_1 x_2 \, dx_2 = x_1 \left[ \frac{x_2^2}{2} \right]_{1 + x_1}^{2} = \frac{x_1}{2} \left( 4 - (1 + x_1)^2 \right) = \frac{1}{2} \left( 3 x_1 - 2 x_1^2 - x_1^3 \right).$$
Therefore,
$$P(X_2 - X_1 \geq 1) = \frac{1}{2} \int_0^1 \left( 3 x_1 - 2 x_1^2 - x_1^3 \right) dx_1 = \frac{1}{2} \left( \frac{3}{2} - \frac{2}{3} - \frac{1}{4} \right) = \frac{7}{24}.$$
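The region with the variable inner limit can be handed to `dblquad` directly, since its inner limits may depend on the outer variable:

```python
from scipy.integrate import dblquad

# Joint pdf of the exercise: x1 * x2 on [0, 1] x [0, 2].
def f_X(x2, x1):  # inner variable (x2) first, as dblquad expects
    return x1 * x2

# P(X2 - X1 >= 1): integrate over 0 <= x1 <= 1 and 1 + x1 <= x2 <= 2.
prob, _err = dblquad(f_X, 0, 1, lambda x1: 1 + x1, lambda x1: 2)
print(prob)  # approximately 0.29166... = 7/24
```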
Exercise 6
Let $X$ be a $2 \times 1$ continuous random vector and denote its components by $X_1$ and $X_2$. Let the support of $X$ be
$$R_X = [0, \infty) \times [0, \infty)$$
(i.e., the set of all $2$-dimensional vectors with non-negative entries) and its joint probability density function be
$$f_X(x_1, x_2) = \begin{cases} e^{-x_1 - x_2} & \text{if } x_1 \geq 0 \text{ and } x_2 \geq 0 \\ 0 & \text{otherwise.} \end{cases}$$
Derive the marginal probability density functions of $X_1$ and $X_2$.
Solution
The support of $X_1$ is
$$R_{X_1} = [0, \infty)$$
(recall that $R_X = R_{X_1} \times R_{X_2}$ and $R_{X_2} = [0, \infty)$). We can find the marginal density by integrating the joint density with respect to $x_2$:
$$f_{X_1}(x_1) = \int_{-\infty}^{\infty} f_X(x_1, x_2) \, dx_2.$$
When $x_1 < 0$, then $f_X(x_1, x_2) = 0$ and the above integral is trivially equal to $0$. Thus, when $x_1 < 0$, then $f_{X_1}(x_1) = 0$. When $x_1 \geq 0$, then
$$f_{X_1}(x_1) = \int_{-\infty}^{0} f_X(x_1, x_2) \, dx_2 + \int_{0}^{\infty} f_X(x_1, x_2) \, dx_2,$$
but the first of the two integrals is zero since $f_X(x_1, x_2) = 0$ when $x_2 < 0$; as a consequence,
$$f_{X_1}(x_1) = \int_{0}^{\infty} e^{-x_1 - x_2} \, dx_2 = e^{-x_1} \int_{0}^{\infty} e^{-x_2} \, dx_2 = e^{-x_1} \left[ -e^{-x_2} \right]_{0}^{\infty} = e^{-x_1}.$$
So, by putting the pieces together, we get the marginal density function of $X_1$:
$$f_{X_1}(x_1) = \begin{cases} e^{-x_1} & \text{if } x_1 \geq 0 \\ 0 & \text{otherwise.} \end{cases}$$
By symmetry, the marginal density function of $X_2$ is
$$f_{X_2}(x_2) = \begin{cases} e^{-x_2} & \text{if } x_2 \geq 0 \\ 0 & \text{otherwise.} \end{cases}$$
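The marginalization can also be done symbolically; a minimal SymPy sketch that integrates the joint density over $x_2$ on $[0, \infty)$:

```python
import sympy as sp

x1, x2 = sp.symbols("x1 x2", nonnegative=True)

# Joint pdf on the support: exp(-x1 - x2) for x1, x2 >= 0.
f = sp.exp(-x1 - x2)

# Marginal of X1: integrate out x2 over [0, oo).
f_X1 = sp.integrate(f, (x2, 0, sp.oo))
print(sp.simplify(f_X1))  # exp(-x1)
```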