
Range null-space decomposition

by Marco Taboga, PhD

The range null-space decomposition is the representation of a vector space as the direct sum of the range and the null space of a certain power of a given matrix.


Range and null space

Let us revise the concepts of range and null space of a matrix, which are discussed in detail in the lecture on the four fundamental subspaces.

Suppose that $A$ is a $K\times K$ matrix and $S$ is the space of all $K\times 1$ column vectors.

The matrix $A$ defines a linear map $f:S\rightarrow S$ such that $f(s)=As$ for any $s\in S$.

The range (or column space) of $A$ is the subspace $$\operatorname{range}(A)=\{As:s\in S\}$$ that is, the set of all values taken by the map as its argument varies over the domain $S$.

The null space of $A$ is the subspace $$\operatorname{null}(A)=\{s\in S:As=0\}$$ formed by all the elements of $S$ that are mapped into the zero vector.
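These two subspaces are easy to compute numerically. The sketch below is our own illustration (not part of the lecture; the helper names `range_basis` and `null_basis` are ours) and extracts orthonormal bases for the range and the null space from the singular value decomposition:

```python
import numpy as np

def range_basis(A, tol=1e-10):
    """Orthonormal basis of range(A): the left singular vectors
    associated with the nonzero singular values."""
    U, s, _ = np.linalg.svd(A)
    r = int(np.sum(s > tol))            # numerical rank of A
    return U[:, :r]

def null_basis(A, tol=1e-10):
    """Orthonormal basis of null(A): the right singular vectors
    associated with the zero singular values."""
    _, s, Vt = np.linalg.svd(A)
    r = int(np.sum(s > tol))
    return Vt[r:].T

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])              # rank-1 example matrix
print(range_basis(A).shape)             # (2, 1): the range is one-dimensional
print(null_basis(A).shape)              # (2, 1): the null space is one-dimensional
```

By the rank-nullity theorem the two dimensions sum to $K=2$, as the shapes above confirm.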

Matrix powers

Let us consider the powers $A^{m}$. In the lecture on matrix powers we have proved that $\operatorname{range}(A^{m})$ and its dimension decrease as we increase the power $m$, while $\operatorname{null}(A^{m})$ and its dimension simultaneously increase. However, at a certain point the two subspaces stabilize: there exists an integer $k\leq K$ such that $$\operatorname{range}(A^{k+j})=\operatorname{range}(A^{k})\quad\text{and}\quad\operatorname{null}(A^{k+j})=\operatorname{null}(A^{k})$$ for any integer $j\geq 1$.

The smallest non-negative integer $k$ for which the above equalities hold is often called the index of $A$, although the term sometimes has different meanings in the linear algebra literature.
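The stabilization argument suggests a simple numerical procedure for finding the index: compare the ranks of successive powers until they stop decreasing. A minimal sketch (our own illustration; the `index_of` helper is hypothetical, not from the lecture):

```python
import numpy as np

def index_of(A, tol=1e-10):
    """Smallest non-negative k with rank(A^(k+1)) == rank(A^k);
    beyond this k, range(A^k) and null(A^k) stop changing."""
    K = A.shape[0]
    ranks = [np.linalg.matrix_rank(np.linalg.matrix_power(A, j), tol)
             for j in range(K + 2)]
    for k in range(K + 1):
        if ranks[k + 1] == ranks[k]:
            return k
    return K                             # the index never exceeds K

J = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])          # nilpotent Jordan block
print(index_of(J))                       # 3: ranks of J^0..J^3 are 3, 2, 1, 0
print(index_of(np.eye(3)))               # 0: the identity is full-rank
```

Because the ranks are non-increasing and bounded below by zero, the loop is guaranteed to find the index within $K$ steps.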

Direct sums

The range null-space decomposition, which we are going to prove and discuss below, asserts that $$S=\operatorname{range}(A^{k})\oplus\operatorname{null}(A^{k})$$ where $k$ is the index of $A$ and the $\oplus$ symbol denotes a direct sum.

Remember that the sum of the two subspaces $$\operatorname{range}(A^{k})+\operatorname{null}(A^{k})$$ is direct (and we use the $\oplus$ instead of the $+$ symbol to denote it) if and only if $$\operatorname{range}(A^{k})\cap\operatorname{null}(A^{k})=\{0\}$$ in which case the representation of any vector $s\in\operatorname{range}(A^{k})\oplus\operatorname{null}(A^{k})$ as a sum of a vector $r\in\operatorname{range}(A^{k})$ and a vector $n\in\operatorname{null}(A^{k})$ is unique.

Furthermore, when the direct sum is equal to the whole space $S$, as in the case of the range null-space decomposition, we say that the two spaces $\operatorname{range}(A^{k})$ and $\operatorname{null}(A^{k})$ are complementary.


The first result we are going to prove is that the intersection of $\operatorname{range}(A^{k})$ and $\operatorname{null}(A^{k})$ contains only the zero vector.

Proposition Let $A$ be a $K\times K$ matrix. Let $k$ be any non-negative integer such that $\operatorname{range}(A^{k+1})=\operatorname{range}(A^{k})$ and $\operatorname{null}(A^{k+1})=\operatorname{null}(A^{k})$. Then, $$\operatorname{range}(A^{k})\cap\operatorname{null}(A^{k})=\{0\}$$


Proof. Let $S$ be the space of all $K\times 1$ column vectors. Let $s\in\operatorname{range}(A^{k})\cap\operatorname{null}(A^{k})$. Since $s\in\operatorname{null}(A^{k})$, we have $$A^{k}s=0$$ Moreover, since $s\in\operatorname{range}(A^{k})$, there exists $t\in S$ such that $$s=A^{k}t$$ We can pre-multiply both sides of the equation by $A^{k}$, so as to get $$A^{k}s=A^{2k}t=0$$ Thus, $t\in\operatorname{null}(A^{2k})$. But $\operatorname{null}(A^{2k})=\operatorname{null}(A^{k})$ because of the stabilization of null spaces. As a consequence, $A^{k}t=0$, which implies that $$s=A^{k}t=0$$ Since $s$ was chosen arbitrarily, we can conclude that the intersection $\operatorname{range}(A^{k})\cap\operatorname{null}(A^{k})$ contains only the zero vector.
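The proposition can be checked numerically on small examples. In the sketch below (ours, not part of the lecture; `intersection_is_trivial` is a hypothetical helper) the concatenated bases of the two subspaces are linearly independent exactly when the intersection is trivial:

```python
import numpy as np

def intersection_is_trivial(A, k, tol=1e-10):
    """True iff range(A^k) and null(A^k) intersect only in {0}:
    the concatenated bases [R | N] then have full column rank."""
    Ak = np.linalg.matrix_power(A, k)
    U, s, Vt = np.linalg.svd(Ak)
    rk = int(np.sum(s > tol))
    R, N = U[:, :rk], Vt[rk:].T          # bases of range(A^k) and null(A^k)
    return np.linalg.matrix_rank(np.hstack([R, N]), tol) == R.shape[1] + N.shape[1]

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])               # nilpotent, with index 2
print(intersection_is_trivial(A, 1))     # False: range(A) = null(A) = span{(1, 0)}
print(intersection_is_trivial(A, 2))     # True: k has reached the index
```

The nilpotent example shows why the stabilization hypothesis matters: for $k$ below the index, the intersection can be non-trivial.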

The decomposition

After having revised all the concepts involved in the range null-space decomposition, we are now ready to state it as a proposition.

Proposition Let $S$ be the space of all $K\times 1$ column vectors. Let $A$ be a $K\times K$ matrix. Let $k$ be any non-negative integer such that $\operatorname{range}(A^{k+1})=\operatorname{range}(A^{k})$ and $\operatorname{null}(A^{k+1})=\operatorname{null}(A^{k})$. Then, $$S=\operatorname{range}(A^{k})\oplus\operatorname{null}(A^{k})$$


Proof. We are going to use the following result, which we proved in the lecture on complementary subspaces: $\operatorname{range}(A^{k})$ and $\operatorname{null}(A^{k})$ are complementary if and only if $B\cup C$ is a basis for $S$ whenever $B$ is a basis for $\operatorname{range}(A^{k})$ and $C$ is a basis for $\operatorname{null}(A^{k})$. Arbitrarily choose two such bases $B=\{b_{1},\ldots,b_{n}\}$ and $C=\{c_{1},\ldots,c_{m}\}$. Suppose that $B\cup C$ is a set of linearly dependent vectors. Then there are scalars $\alpha_{1},\ldots,\alpha_{n},\beta_{1},\ldots,\beta_{m}$, not all equal to zero, such that $$\alpha_{1}b_{1}+\ldots+\alpha_{n}b_{n}+\beta_{1}c_{1}+\ldots+\beta_{m}c_{m}=0$$ or $$\alpha_{1}b_{1}+\ldots+\alpha_{n}b_{n}=-\left(\beta_{1}c_{1}+\ldots+\beta_{m}c_{m}\right)\neq 0$$ where the last inequality descends from the fact that $B$ and $C$ are sets of linearly independent vectors and the scalars are not all zero. But $$\alpha_{1}b_{1}+\ldots+\alpha_{n}b_{n}\in\operatorname{range}(A^{k})$$ and $$-\left(\beta_{1}c_{1}+\ldots+\beta_{m}c_{m}\right)\in\operatorname{null}(A^{k})$$ This is impossible because, as demonstrated above, $\operatorname{range}(A^{k})\cap\operatorname{null}(A^{k})$ contains only the zero vector. Thus, we have proved by contradiction that $B\cup C$ is a set of linearly independent vectors. Moreover, by the rank-nullity theorem, $K=n+m$. Thus, $B\cup C$ is a set of $K$ linearly independent vectors. In other words, it is a basis for $S$. Since $B$ and $C$ were chosen arbitrarily among the bases of $\operatorname{range}(A^{k})$ and $\operatorname{null}(A^{k})$ respectively, it follows that $\operatorname{range}(A^{k})$ and $\operatorname{null}(A^{k})$ are complementary subspaces, that is, $$S=\operatorname{range}(A^{k})\oplus\operatorname{null}(A^{k})$$
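Since the union of the two bases is a basis for $S$, the unique components $r$ and $n$ of any vector can be computed by solving a single linear system. A sketch of this idea (our own illustration; `decompose` is a hypothetical helper, and we take the power equal to $K$, which is always at least the index):

```python
import numpy as np

def decompose(A, s, tol=1e-10):
    """Write s as r + n with r in range(A^K) and n in null(A^K);
    the power K always reaches the stabilized subspaces."""
    K = A.shape[0]
    Ak = np.linalg.matrix_power(A, K)
    U, sv, Vt = np.linalg.svd(Ak)
    rk = int(np.sum(sv > tol))
    R = U[:, :rk]                        # orthonormal basis B of range(A^K)
    N = Vt[rk:].T                        # orthonormal basis C of null(A^K)
    # B ∪ C is a basis of S, so the combined matrix is invertible
    # and the coefficients (hence r and n) are unique
    coeffs = np.linalg.solve(np.hstack([R, N]), s)
    return R @ coeffs[:rk], N @ coeffs[rk:]

A = np.array([[1.0, 1.0],
              [0.0, 0.0]])
r, n = decompose(A, np.array([1.0, 2.0]))
print(r, n)                              # r + n reproduces the input vector
```

Note that this is not an orthogonal projection: the two subspaces are complementary but generally not orthogonal to each other, which is why a linear system rather than an inner product is needed.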

When the decomposition is trivial

When the matrix $A$ is full-rank, the range null-space decomposition is trivial: since the product of two full-rank square matrices is full-rank, $A^{k}$ is full-rank for any non-negative integer $k$. In other words, the columns of $A^{k}$ are always linearly independent, and, as a consequence, they span all of $S$. Thus, $$\operatorname{range}(A^{k})=S\quad\text{and}\quad\operatorname{null}(A^{k})=\{0\}$$ for any $k$, which makes the decomposition trivial.

Note that $A^{0}=I$ (where $I$ is the identity matrix) and $\operatorname{range}(A^{1})=\operatorname{range}(A^{0})=S$. As a consequence, the index of a full-rank matrix is $0$.
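This trivial case is easy to verify numerically. With a hypothetical invertible matrix (our own example, not from the lecture):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])               # invertible, hence full-rank
for k in range(4):
    Ak = np.linalg.matrix_power(A, k)
    # every power stays full-rank, so range(A^k) = S and null(A^k) = {0}
    assert np.linalg.matrix_rank(Ak) == 2
print("all powers are full-rank, so the index is 0")
```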

The decomposition for operators

Everything that we prove for square matrices can be applied to linear operators on finite-dimensional spaces. In fact, in the finite-dimensional case, each square matrix defines an operator and each operator is associated to a square matrix.

Remember that raising a matrix to the k-th integer power is the same as composing its associated operator k times with itself.

Thus, we can state the range null-space decomposition for linear operators as follows.

Proposition Let $S$ be a finite-dimensional vector space. Let $f:S\rightarrow S$ be a linear operator. Then, there exists a non-negative integer $k$ such that $$S=\operatorname{range}(f^{k})\oplus\operatorname{null}(f^{k})$$ where $f^{k}$ denotes the composition of $f$ with itself $k$ times.

Solved exercises

Below you can find some exercises with explained solutions.

Exercise 1

Find the index of [eq55] and determine the range null-space decomposition of $A$.


We have [eq56] The second power of $A$ is [eq57] from which we can clearly see that [eq58] Thus, the index of $A$ is $1$ and [eq59]
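Since the exercise matrix is not reproduced above, here is an analogous worked example with a hypothetical matrix $A=\begin{pmatrix}1 & 1\\ 0 & 0\end{pmatrix}$ (our own choice, not the one from the exercise), which also has index $1$:

```python
import numpy as np

# hypothetical example matrix (not the one from the exercise above)
A = np.array([[1.0, 1.0],
              [0.0, 0.0]])

A2 = A @ A                               # here A^2 = A, so the ranks agree
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A2) == 1  # index is 1

# bases: range(A) is spanned by (1, 0), null(A) by (1, -1)
r = np.array([1.0, 0.0])
n = np.array([1.0, -1.0])
assert np.allclose(A @ n, 0)             # n lies in null(A)
# together they form a basis of R^2, i.e. S = range(A) ⊕ null(A)
assert np.linalg.matrix_rank(np.column_stack([r, n])) == 2
print("index 1; R^2 = range(A) + null(A) as a direct sum")
```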

How to cite

Please cite as:

Taboga, Marco (2021). "Range null-space decomposition", Lectures on matrix algebra.
