
Matrix diagonalization

by Marco Taboga, PhD

Matrix diagonalization is the process of performing a similarity transformation on a matrix in order to recover a similar matrix that is diagonal (i.e., all its non-diagonal entries are zero).

Once a matrix is diagonalized it becomes very easy to raise it to integer powers.

Not all matrices are diagonalizable. The diagonalizable matrices are those that have no defective eigenvalues (i.e., eigenvalues whose geometric multiplicity is less than their algebraic multiplicity).


Similarity transformations

Remember that two square $K\times K$ matrices $A$ and $B$ are said to be similar if there exists an invertible $K\times K$ matrix $P$ such that $$B=P^{-1}AP.$$

If two matrices are similar, then they have the same rank, trace, determinant and eigenvalues. Not only do two similar matrices have the same eigenvalues; their eigenvalues also have the same algebraic and geometric multiplicities.
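These invariance properties are easy to check numerically. The sketch below (with a $2\times 2$ matrix $A$ and an invertible matrix $P$ of our own choosing, not taken from the text) verifies that the similarity transformation $B=P^{-1}AP$ preserves the trace and the determinant:

```python
# Sketch: a similarity transformation B = P^{-1} A P preserves the
# trace and determinant of a matrix. A and P are illustrative choices.

def matmul(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(X):
    """Inverse of a 2x2 matrix (assumes a non-zero determinant)."""
    d = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[ X[1][1] / d, -X[0][1] / d],
            [-X[1][0] / d,  X[0][0] / d]]

trace = lambda X: X[0][0] + X[1][1]
det = lambda X: X[0][0] * X[1][1] - X[0][1] * X[1][0]

A = [[4.0, 1.0], [2.0, 3.0]]   # hypothetical matrix
P = [[1.0, 1.0], [0.0, 1.0]]   # hypothetical invertible matrix

B = matmul(inv2(P), matmul(A, P))   # similarity transformation

print(trace(A), trace(B))   # both 7.0
print(det(A), det(B))       # both 10.0
```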

Diagonalizable matrix

We can now provide a definition of diagonalizable matrix.

Definition Let A be a $K\times K$ matrix. We say that A is diagonalizable if and only if it is similar to a diagonal matrix.

In other words, when A is diagonalizable, there exists an invertible matrix $P$ such that $$P^{-1}AP=D$$ where $D$ is a diagonal matrix, that is, a matrix whose non-diagonal entries are all zero.

Example Define the $2\times 2$ matrix[eq3]and [eq4]The inverse of $P$ is[eq5]The similarity transformation [eq6]gives the diagonal matrix $D$ as a result. Hence, A is diagonalizable.

Relation to eigenvalues and eigenvectors

We can write the diagonalization $D=P^{-1}AP$, after pre-multiplying both sides by $P$, as $$AP=PD.$$

The k-th column of $AP$ is equal to $$\left( AP\right)_{\bullet k}=AP_{\bullet k}$$ where $P_{\bullet k}$ is the k-th column of $P$ (if you are puzzled, revise the lecture on matrix multiplication and linear combinations).

The k-th column of $PD$ is equal to $$\left( PD\right)_{\bullet k}=PD_{\bullet k}$$ where $D_{\bullet k}$ is the k-th column of $D$.

In turn, $PD_{\bullet k}$ is a linear combination of the columns of $P$ with coefficients taken from the vector $D_{\bullet k}$.

Since $D$ is diagonal, the only non-zero entry of $D_{\bullet k}$ is $D_{kk}$. Therefore, $$PD_{\bullet k}=D_{kk}P_{\bullet k}.$$

Thus, we have arrived at the conclusion that $$AP_{\bullet k}=D_{kk}P_{\bullet k}.$$

The latter equality means that $P_{\bullet k}$ is an eigenvector of A associated to the eigenvalue $D_{kk}$.

This is true for $k=1,\ldots,K$. Thus, the diagonal elements of $D$ are the eigenvalues of A and the columns of $P$ are the corresponding eigenvectors.
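This conclusion can be checked column by column. The sketch below uses a hypothetical $2\times 2$ matrix $A$ of our own, together with matrices $P$ and $D$ that diagonalize it, and verifies that $AP_{\bullet k}=D_{kk}P_{\bullet k}$ for each $k$:

```python
# Sketch: each column of P is an eigenvector of A, with the matching
# diagonal entry of D as its eigenvalue. A, P, D are our own example.
A = [[0.0, 2.0], [1.0, 1.0]]    # hypothetical matrix
P = [[2.0, 2.0], [-1.0, 2.0]]   # eigenvectors as columns
D = [[-1.0, 0.0], [0.0, 2.0]]   # eigenvalues on the diagonal

for k in range(2):
    col = [P[0][k], P[1][k]]                                   # k-th column of P
    Acol = [sum(A[i][j] * col[j] for j in range(2)) for i in range(2)]
    scaled = [D[k][k] * c for c in col]                        # D_kk times the column
    print(Acol, scaled)   # the two vectors coincide
```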

The matrix $P$ used in the diagonalization must be invertible. Therefore, its columns must be linearly independent. Stated differently, there must be K linearly independent eigenvectors of A.

In the lecture on the linear independence of eigenvectors, we have discussed the fact that, for some $K\times K$ matrices, called defective matrices, it is not possible to find K linearly independent eigenvectors. A matrix is defective when it has at least one repeated eigenvalue whose geometric multiplicity is strictly less than its algebraic multiplicity (such an eigenvalue is called a defective eigenvalue).

Therefore, defective matrices cannot be diagonalized.

The next proposition summarizes what we have discussed thus far.

Proposition A $K\times K$ matrix A is diagonalizable if and only if it does not have any defective eigenvalue.

Proof

We have already proved the "only if" part because we have shown above that, if A is diagonalizable, then it possesses K linearly independent eigenvectors, which implies that no eigenvalue is defective. The "if" part is simple. If A possesses K linearly independent eigenvectors, then we can adjoin them to form the full-rank matrix $P$ and we can form a diagonal matrix $D$ whose diagonal elements are equal to the corresponding eigenvalues. Then, by the definition of eigenvalues and eigenvectors, we have that $$AP=PD$$ and the diagonalization of A follows.

Remember that if all the eigenvalues of A are distinct, then A does not have any defective eigenvalue. Therefore, possessing distinct eigenvalues is a sufficient condition for diagonalizability.

How to diagonalize a matrix

Suppose we are given a matrix A and we are told to diagonalize it. How do we do it?

The answer has already been given in the previous proof, but it is worth repeating.

We provide the answer as a recipe for diagonalization:

  1. Compute the eigenvalues of A.

  2. Check that no eigenvalue is defective. If any eigenvalue is defective, then the matrix cannot be diagonalized. Otherwise, you can go to the next step.

  3. For each eigenvalue, find as many linearly independent eigenvectors as you can (their number is equal to the geometric multiplicity of the eigenvalue).

  4. Adjoin all the eigenvectors so as to form a full-rank matrix $P$.

  5. Build a diagonal matrix $D$ whose diagonal elements are the eigenvalues of A.

  6. The diagonalization is done: $D=P^{-1}AP$.

Importantly, we need to follow the same order when we build $P$ and $D$: if a certain eigenvalue has been put at the intersection of the k-th column and the k-th row of $D$, then its corresponding eigenvector must be placed in the k-th column of $P$.

Example Define the $2\times 2$ matrix[eq13]The eigenvalues $\lambda$ solve the characteristic equation $$\det\left( A-\lambda I\right) =0.$$ Let us compute the determinant[eq15]Thus, there are two eigenvalues $\lambda_{1}=-1$ and $\lambda_{2}=2$. There are no repeated eigenvalues and, as a consequence, no defective eigenvalues. Therefore, A is diagonalizable. The eigenvectors $x_{1}$ associated to $\lambda_{1}$ solve[eq16]Since[eq17]we can choose, for example,[eq18]Moreover,[eq19]so we can choose, as an eigenvector associated to $\lambda_{2}$, the following vector:[eq20]Therefore, the diagonal matrix of eigenvalues is[eq21]and the invertible matrix of eigenvectors is[eq22]
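The recipe can be sketched in code. The snippet below diagonalizes a hypothetical $2\times 2$ matrix of our own, chosen so that its eigenvalues are $-1$ and $2$ like in the example above (it is not the matrix used in the text); the eigenvector formula in step 3 is a shortcut that assumes the top-right entry of $A$ is non-zero.

```python
import math

# Sketch of the diagonalization recipe for a hypothetical 2x2 matrix.
A = [[0.0, 2.0], [1.0, 1.0]]

# Step 1: eigenvalues from the characteristic polynomial
# det(A - lambda*I) = lambda^2 - trace(A)*lambda + det(A) = 0.
tr = A[0][0] + A[1][1]
dt = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = math.sqrt(tr * tr - 4.0 * dt)
lam1, lam2 = (tr - disc) / 2.0, (tr + disc) / 2.0   # -1.0 and 2.0

# Step 3: one eigenvector per eigenvalue. For (A - lam*I)x = 0 with
# A[0][1] != 0, the vector (A[0][1], lam - A[0][0]) is a solution.
def eigvec(lam):
    return [A[0][1], lam - A[0][0]]

v1, v2 = eigvec(lam1), eigvec(lam2)

# Steps 4-5: eigenvectors become the columns of P; eigenvalues go on
# the diagonal of D, in the same order.
P = [[v1[0], v2[0]], [v1[1], v2[1]]]
D = [[lam1, 0.0], [0.0, lam2]]

# Step 6: verify that P^{-1} A P equals D.
detP = P[0][0] * P[1][1] - P[0][1] * P[1][0]
Pinv = [[ P[1][1] / detP, -P[0][1] / detP],
        [-P[1][0] / detP,  P[0][0] / detP]]
AP = [[sum(A[i][k] * P[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
check = [[sum(Pinv[i][k] * AP[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
print(check)   # approximately [[-1, 0], [0, 2]]
```

Step 2 (checking for defective eigenvalues) is trivial here because the two eigenvalues are distinct.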

The diagonalization is not unique

When a matrix A is diagonalizable, there is no unique way to diagonalize it.

For example, we can change the order in which the eigenvalues are put on the diagonal of $D$. Or we can replace a column of $P$ with a scalar multiple of itself (which is another eigenvector associated to the same eigenvalue). If there is a repeated eigenvalue, we can choose a different basis for its eigenspace.

Example In the previous example, we could have defined[eq23]and[eq24]Another possibility would have been to choose[eq25]and[eq26]

The most important application

The most important application of diagonalization is the computation of matrix powers.

Let $D$ be a diagonal matrix with diagonal entries $\lambda_{1},\ldots,\lambda_{K}$:$$D=\begin{bmatrix} \lambda_{1} & 0 & \cdots & 0 \\ 0 & \lambda_{2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_{K} \end{bmatrix}$$

Then its n-th power can be easily computed by raising its diagonal elements to the n-th power:$$D^{n}=\begin{bmatrix} \lambda_{1}^{n} & 0 & \cdots & 0 \\ 0 & \lambda_{2}^{n} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_{K}^{n} \end{bmatrix}$$

If a matrix A is diagonalizable, then $$A=PDP^{-1}$$and$$A^{n}=PD^{n}P^{-1}.$$

Thus, all we have to do to raise A to the n-th power is to: 1) diagonalize A (if possible); 2) raise the diagonal matrix $D$ to the n-th power, which is very easy to do; 3) pre-multiply the matrix $D^{n}$ thus obtained by $P$ and post-multiply it by $P^{-1}$.
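A minimal sketch of this three-step procedure, reusing a hypothetical $2\times 2$ diagonalization of our own (the matrices are illustrative, not from the text):

```python
def matmul(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Hypothetical diagonalization A = P D P^{-1} (our own 2x2 example).
P    = [[2.0, 2.0], [-1.0, 2.0]]
Pinv = [[1.0 / 3.0, -1.0 / 3.0], [1.0 / 6.0, 1.0 / 3.0]]
D    = [[-1.0, 0.0], [0.0, 2.0]]
A    = matmul(P, matmul(D, Pinv))   # A = [[0, 2], [1, 1]]

n = 4
# Step 2: raising D to the n-th power just powers its diagonal entries.
Dn = [[D[0][0] ** n, 0.0], [0.0, D[1][1] ** n]]

# Step 3: A^n = P D^n P^{-1}, two products instead of n - 1.
An = matmul(P, matmul(Dn, Pinv))
print(An)   # approximately [[6, 10], [5, 11]]
```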

Inverse matrix

Once a matrix has been diagonalized it is straightforward to compute its inverse (if it exists).

In fact, we have that$$A^{-1}=PD^{-1}P^{-1}$$where$$D^{-1}=\begin{bmatrix} 1/\lambda_{1} & 0 & \cdots & 0 \\ 0 & 1/\lambda_{2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1/\lambda_{K} \end{bmatrix}$$and the inverse exists as long as no eigenvalue $\lambda_{k}$ is zero.
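A short sketch of this formula, again with a hypothetical $2\times 2$ diagonalization of our own in which no eigenvalue is zero:

```python
def matmul(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Hypothetical diagonalization A = P D P^{-1} (our own 2x2 example,
# in which A = [[0, 2], [1, 1]]).
P    = [[2.0, 2.0], [-1.0, 2.0]]
Pinv = [[1.0 / 3.0, -1.0 / 3.0], [1.0 / 6.0, 1.0 / 3.0]]
D    = [[-1.0, 0.0], [0.0, 2.0]]

# D^{-1} exists because no eigenvalue is zero; invert the diagonal.
Dinv = [[1.0 / D[0][0], 0.0], [0.0, 1.0 / D[1][1]]]

# A^{-1} = P D^{-1} P^{-1}
Ainv = matmul(P, matmul(Dinv, Pinv))
print(Ainv)   # approximately [[-0.5, 1], [0.5, 0]]
```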

Solved exercises

Below you can find some exercises with explained solutions.

Exercise 1

Suppose that a matrix A can be diagonalized as [eq33]where[eq34] [eq35]Suppose that $a^{2}+b^{2}=1$. Show that[eq36]and compute $A^{4}$.

Solution

First of all, let us check that $P^{-1}=P^{\top}$:[eq37]We can easily compute powers of A:[eq38]

How to cite

Please cite as:

Taboga, Marco (2017). "Matrix diagonalization", Lectures on matrix algebra. https://www.statlect.com/matrix-algebra/matrix-diagonalization.
