
Factorization of joint probability density functions

by Marco Taboga, PhD

This lecture discusses how to factorize the joint probability density function of two continuous random variables (or random vectors) X and Y into two factors:

  1. the conditional probability density function of X given Y=y;

  2. the marginal probability density function of Y.


The factorization

The factorization, which has already been discussed in the lecture entitled Conditional probability distributions, is formally stated in the following proposition.

Proposition (factorization) Let $(X,Y)$ be a continuous random vector with support $R_{XY}$ and joint probability density function $f_{XY}(x,y)$. Denote by $f_{X\mid Y}(x\mid y)$ the conditional probability density function of X given Y=y and by $f_Y(y)$ the marginal probability density function of Y. Then,$$f_{XY}(x,y)=f_{X\mid Y}(x\mid y)\,f_Y(y)$$for any $x$ and $y$.
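
For a concrete instance of the identity (with an illustrative density, not one used elsewhere in this lecture): if $f_{XY}(x,y)=4xy$ on $[0,1]^2$, then $f_Y(y)=\int_0^1 4xy\,dx=2y$ and $f_{X\mid Y}(x\mid y)=4xy/(2y)=2x$, so that indeed $f_{X\mid Y}(x\mid y)\,f_Y(y)=2x\cdot 2y=f_{XY}(x,y)$.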

A factorization method

When we know the joint probability density function $f_{XY}(x,y)$ and we need to factorize it into the conditional probability density function $f_{X\mid Y}(x\mid y)$ and the marginal probability density function $f_Y(y)$, we usually proceed in two steps (see the sketch after this list):

  1. marginalize $f_{XY}(x,y)$ by integrating it with respect to $x$ and obtain the marginal probability density function$$f_Y(y)=\int_{-\infty}^{\infty}f_{XY}(x,y)\,dx;$$

  2. divide $f_{XY}(x,y)$ by $f_Y(y)$ and obtain the conditional probability density function$$f_{X\mid Y}(x\mid y)=\frac{f_{XY}(x,y)}{f_Y(y)}$$(of course, this step makes sense only when $f_Y(y)>0$).
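
To make the two steps concrete, here is a minimal sympy sketch; the joint density $f_{XY}(x,y)=x+y$ on the unit square is a hypothetical example chosen only for illustration:

```python
import sympy as sp

x, y = sp.symbols('x y', nonnegative=True)

# Hypothetical joint density on the unit square [0, 1] x [0, 1]
f_xy = x + y

# Step 1: marginalize -- integrate the joint density with respect to x
f_y = sp.integrate(f_xy, (x, 0, 1))
print(f_y)  # y + 1/2

# Step 2: divide the joint density by the marginal (requires f_y > 0)
f_x_given_y = sp.simplify(f_xy / f_y)
print(f_x_given_y)  # equivalent to (x + y)/(y + 1/2)

# Sanity check: the conditional integrates to 1 over x for any fixed y
assert sp.simplify(sp.integrate(f_x_given_y, (x, 0, 1)) - 1) == 0
```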

In some cases, the first step (marginalization) can be difficult to perform. In these cases, it is possible to avoid the marginalization step by making a guess about the factorization of $f_{XY}(x,y)$ and verifying whether the guess is correct with the help of the following proposition.

Proposition (factorization method) Suppose there are two functions $g(x,y)$ and $h(y)$ such that

  1. for any $x$ and $y$, the following holds:$$f_{XY}(x,y)=g(x,y)\,h(y);$$

  2. for any fixed $y$, $g(x,y)$, considered as a function of $x$, is a probability density function.

Then,$$f_{X\mid Y}(x\mid y)=g(x,y)\quad\text{and}\quad f_Y(y)=h(y).$$
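
The two conditions of this proposition can be checked symbolically. Below is a minimal sympy sketch using a hypothetical joint density (X conditionally normal given Y, and Y standard normal; this example is not from the lecture):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# Hypothetical joint density: X | Y = y is N(y, 1) and Y is N(0, 1)
f_xy = sp.exp(-(x - y)**2 / 2 - y**2 / 2) / (2 * sp.pi)

# Guess: g(x, y) is the N(y, 1) density in x, h(y) is the N(0, 1) density
g = sp.exp(-(x - y)**2 / 2) / sp.sqrt(2 * sp.pi)
h = sp.exp(-y**2 / 2) / sp.sqrt(2 * sp.pi)

# Condition 1: the joint equals the product g(x, y) * h(y) for all x, y
assert sp.simplify(f_xy - g * h) == 0

# Condition 2: for fixed y, g(x, y) is a density in x (it integrates to 1)
assert sp.simplify(sp.integrate(g, (x, -sp.oo, sp.oo)) - 1) == 0

# The proposition then yields the factors without any marginalization:
# f_{X|Y}(x|y) = g(x, y) and f_Y(y) = h(y)
```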

Proof

The proof covers the case in which X and Y are random variables. The proof for the case in which they are random vectors is a straightforward generalization. The marginal probability density of Y satisfies$$f_Y(y)=\int_{-\infty}^{\infty}f_{XY}(x,y)\,dx;$$therefore, by property 1 above,$$f_Y(y)=\int_{-\infty}^{\infty}g(x,y)\,h(y)\,dx=h(y)\int_{-\infty}^{\infty}g(x,y)\,dx=h(y),$$where the last equality follows from the fact that, for any fixed $y$, $g(x,y)$, considered as a function of $x$, is a probability density function, and the integral of a probability density function over $\mathbb{R}$ equals 1. Therefore,$$f_Y(y)=h(y),$$which, in turn, implies$$f_{X\mid Y}(x\mid y)=\frac{f_{XY}(x,y)}{f_Y(y)}=\frac{g(x,y)\,h(y)}{h(y)}=g(x,y).$$

Thus, whenever we are given a formula for the joint density function $f_{XY}(x,y)$ and we want to find the marginal and conditional density functions, we have to manipulate the formula and express it as the product of:

  1. a function of $x$ and $y$ that is a probability density function in $x$ for all values of $y$;

  2. a function of $y$ that does not depend on $x$.

Example Let the joint density function of X and Y be$$f_{XY}(x,y)=\begin{cases}y\exp\left(-y(x+1)\right) & \text{if }x\geq 0\text{ and }y\geq 0\\ 0 & \text{otherwise.}\end{cases}$$For $x\geq 0$ and $y\geq 0$, the joint density can be factorized as follows:$$f_{XY}(x,y)=g(x,y)\,h(y),$$where$$g(x,y)=y\exp(-yx)\quad\text{and}\quad h(y)=\exp(-y).$$Note that $g(x,y)$ is a probability density function in $x$ for any fixed $y$ (it is the probability density function of an exponential random variable with parameter $y$), and $h(y)$ does not depend on $x$. Therefore,$$f_{X\mid Y}(x\mid y)=y\exp(-yx)\quad\text{and}\quad f_Y(y)=\exp(-y).$$
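
The factorization in the example can be double-checked with a computer algebra system. The following sympy sketch verifies both conditions of the factorization method and cross-checks the result against direct marginalization:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)

# Joint density from the example (for x >= 0 and y >= 0)
f_xy = y * sp.exp(-y * (x + 1))

# Guessed factors: g is the exponential(y) density in x,
# h is the exponential(1) density in y
g = y * sp.exp(-y * x)
h = sp.exp(-y)

# Condition 1: the joint equals the product g * h
assert sp.simplify(f_xy - g * h) == 0

# Condition 2: for any fixed y > 0, g integrates to 1 over x
assert sp.integrate(g, (x, 0, sp.oo)) == 1

# Cross-check: direct marginalization of the joint recovers h
assert sp.simplify(sp.integrate(f_xy, (x, 0, sp.oo)) - h) == 0
```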

How to cite

Please cite as:

Taboga, Marco (2021). "Factorization of joint probability density functions", Lectures on probability theory and mathematical statistics. Kindle Direct Publishing. Online appendix. https://www.statlect.com/fundamentals-of-probability/factorization-of-joint-probability-density-functions.
