
Determinant of an elementary matrix


In this lecture we study the properties of the determinants of elementary matrices. The results derived here will then be used in subsequent lectures to prove general properties satisfied by the determinant of any matrix.


Elementary matrix

Remember that an elementary matrix is a square matrix that has been obtained by performing an elementary row or column operation on an identity matrix.

Furthermore, elementary matrices can be used to perform elementary operations on other matrices: if we perform an elementary row (column) operation on a matrix A, this is the same as performing the given operation on the identity matrix I, so as to get an elementary matrix E, and then pre-multiplying (post-multiplying) A by E.
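This equivalence is easy to check numerically. The following is a minimal sketch using NumPy; the 3x3 matrix and the particular row and column operations are arbitrary choices for illustration.

```python
import numpy as np

# An arbitrary 3x3 example matrix.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

# Elementary matrix E: perform a row operation on the identity
# (here: add 2 times row 0 to row 2).
E = np.eye(3)
E[2, :] += 2.0 * E[0, :]

# Pre-multiplying A by E performs the same row operation on A.
B = E @ A
C = A.copy()
C[2, :] += 2.0 * C[0, :]   # the operation done directly on A
assert np.allclose(B, C)

# Analogously, post-multiplying by an elementary matrix built from a
# column operation on the identity performs that column operation on A.
Ecol = np.eye(3)
Ecol[:, 2] += 2.0 * Ecol[:, 0]   # add 2 times column 0 to column 2
D = A @ Ecol
Ccol = A.copy()
Ccol[:, 2] += 2.0 * Ccol[:, 0]
assert np.allclose(D, Ccol)
```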

Also remember that there are three elementary row (column) operations:

1. multiplication of a row (column) by a non-zero constant;

2. interchange of two rows (columns);

3. addition of a multiple of one row (column) to another row (column).

Each of these three operations will be analyzed separately in the next sections. We will focus on elementary row operations. The results for column operations are analogous.

Determinant of a row multiplication matrix

Let us start with elementary matrices that allow us to multiply a row by a constant.

Proposition Let A be a $K\times K$ matrix. Let E be an elementary matrix obtained by multiplying a row of the $K\times K$ identity matrix I by a constant $c\neq 0$. Then, $$\det(E)=c$$ and $$\det(EA)=c\det(A)=\det(E)\det(A)$$

Proof

Denote by $P$ the set of all permutations of the first $K$ natural numbers. Denote by $\pi_{0}\in P$ the permutation in which the $K$ numbers are left in their natural order (sorted in increasing order). Since $\pi_{0}$ does not contain any inversion (see the lecture on the sign of a permutation), its parity is even and its sign is $$\operatorname{sgn}(\pi_{0})=1$$ Then, the determinant of the identity matrix is $$\det(I)=\sum_{\pi\in P}\operatorname{sgn}(\pi)\prod_{k=1}^{K}I_{k\pi(k)}\overset{(A)}{=}\operatorname{sgn}(\pi_{0})\prod_{k=1}^{K}I_{kk}=1$$ where in step (A) we have used the fact that for all permutations $\pi$ except $\pi_{0}$ the product $$\prod_{k=1}^{K}I_{k\pi(k)}$$ involves at least one off-diagonal element that is equal to zero (remember that all the diagonal elements of I are equal to 1 and all the off-diagonal elements are equal to 0).

Let us now consider the elementary matrix E. The only difference with respect to I is that one of the diagonal elements of E is equal to $c$. As a consequence, we have $$\det(E)=c$$ Suppose that the first row of E has been multiplied by $c$, so that $EA$ is the matrix obtained by multiplying the first row of A by $c$. We can write the determinant of $EA$ as $$\det(EA)=\sum_{\pi\in P}\operatorname{sgn}(\pi)\,(cA_{1\pi(1)})\prod_{k=2}^{K}A_{k\pi(k)}=c\sum_{\pi\in P}\operatorname{sgn}(\pi)\prod_{k=1}^{K}A_{k\pi(k)}=c\det(A)$$ Therefore, $$\det(EA)=c\det(A)=\det(E)\det(A)$$ The assumption that the row multiplied by $c$ is the first one is without loss of generality (if it is the $j$-th row, then the constant $c$ needs to be factored out of the $j$-th term in the above formulae, but the result is the same).
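The proposition can be sanity-checked numerically. Below is a minimal sketch using NumPy; the dimension, the constant, and the random matrix are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
K, c = 4, 5.0
A = rng.standard_normal((K, K))   # an arbitrary K x K matrix

# Elementary matrix: multiply the first row of the identity by c.
E = np.eye(K)
E[0, :] *= c

# det(E) = c and det(EA) = c * det(A).
assert np.isclose(np.linalg.det(E), c)
assert np.isclose(np.linalg.det(E @ A), c * np.linalg.det(A))
```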

Determinant of a row interchange matrix

Let us now tackle the case of elementary matrices that allow us to interchange two rows.

Proposition Let A be a $K\times K$ matrix. Let E be an elementary matrix obtained by interchanging two rows of the $K\times K$ identity matrix I. Then, $$\det(E)=-1$$ and $$\det(EA)=-\det(A)=\det(E)\det(A)$$

Proof

In order to understand this proof, we need to revise the concept of transposition introduced in the lecture entitled Sign of a permutation. A transposition is the operation of interchanging any two distinct elements of a permutation. A transposition changes the parity of a permutation (it makes an even permutation odd and vice versa), as well as its sign. Any permutation of the first $K$ natural numbers can be obtained by performing on them a sequence of transpositions. The number of transpositions determines the parity of the permutation (even if the number of transpositions is even, and odd otherwise).

Suppose the matrix E has been obtained from the identity matrix I by interchanging rows $k_{1}$ and $k_{2}$, and denote by $Q$ the set of the first $K$ natural numbers except $k_{1}$ and $k_{2}$. For every permutation $\pi$ of the first $K$ natural numbers there is a permutation $\rho$ such that $$\rho(k_{1})=\pi(k_{2}),\qquad\rho(k_{2})=\pi(k_{1}),\qquad\rho(k)=\pi(k)\text{ for all }k\in Q$$ Since $\rho$ is a transposition of $\pi$, we have $$\operatorname{sgn}(\rho)=-\operatorname{sgn}(\pi)$$ Then, $$\det(E)=\sum_{\pi\in P}\operatorname{sgn}(\pi)\,E_{k_{1}\pi(k_{1})}E_{k_{2}\pi(k_{2})}\prod_{k\in Q}E_{k\pi(k)}\overset{(A)}{=}\sum_{\pi\in P}\operatorname{sgn}(\pi)\,I_{k_{2}\pi(k_{1})}I_{k_{1}\pi(k_{2})}\prod_{k\in Q}I_{k\pi(k)}\overset{(B)}{=}-\sum_{\rho\in P}\operatorname{sgn}(\rho)\,I_{k_{1}\rho(k_{1})}I_{k_{2}\rho(k_{2})}\prod_{k\in Q}I_{k\rho(k)}=-\det(I)=-1$$ where: in step (A) we have used the fact that all rows of E are equal to the rows of I, except the $k_{1}$-th and $k_{2}$-th, which are interchanged; in step (B) we have used the definition of the permutation $\rho$ given above. The determinant of $EA$, which is obtained by interchanging the $k_{1}$-th and $k_{2}$-th rows of A, is derived in an analogous manner: $$\det(EA)=-\det(A)$$ Therefore, $$\det(EA)=-\det(A)=\det(E)\det(A)$$
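Again, a quick numerical check is possible. This is a sketch using NumPy; the dimension, the pair of rows interchanged, and the random matrix are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
K = 4
A = rng.standard_normal((K, K))   # an arbitrary K x K matrix

# Elementary matrix: interchange rows k1 and k2 of the identity.
k1, k2 = 0, 2
E = np.eye(K)
E[[k1, k2], :] = E[[k2, k1], :]

# det(E) = -1 and det(EA) = -det(A).
assert np.isclose(np.linalg.det(E), -1.0)
assert np.isclose(np.linalg.det(E @ A), -np.linalg.det(A))
```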

Determinant of a row addition matrix

The last case we analyze is that of elementary matrices that allow us to add a multiple of one row to another row.

Proposition Let A be a $K\times K$ matrix. Let E be an elementary matrix obtained by adding a multiple of one row of the $K\times K$ identity matrix I to another of its rows. Then, $$\det(E)=1$$ and $$\det(EA)=\det(A)=\det(E)\det(A)$$

Proof

Suppose the matrix E has been obtained from the identity matrix I by adding $\alpha$ times the $k_{1}$-th row to the $k_{2}$-th. Denote by F the matrix obtained from the identity matrix by replacing the $k_{2}$-th row with the $k_{1}$-th. Thus, the $k_{1}$-th and the $k_{2}$-th rows of F coincide. By the proposition above on row interchanges, the determinant of the matrix $F^{\prime}$ obtained by interchanging the $k_{1}$-th and the $k_{2}$-th rows of F is $$\det(F^{\prime})=-\det(F)$$ But $F=F^{\prime}$ because we have interchanged two identical rows, therefore it must be that $$\det(F)=-\det(F)$$ which implies $$\det(F)=0$$

Denote by $Q$ the set of the first $K$ natural numbers except $k_{2}$. Then, $$\det(E)=\sum_{\pi\in P}\operatorname{sgn}(\pi)\left(I_{k_{2}\pi(k_{2})}+\alpha I_{k_{1}\pi(k_{2})}\right)\prod_{k\in Q}I_{k\pi(k)}=\det(I)+\alpha\det(F)=1$$ The determinant of $EA$, which is obtained by adding $\alpha$ times the $k_{1}$-th row to the $k_{2}$-th row of A, is derived in an analogous manner. Let us denote by F the matrix obtained from A by replacing the $k_{2}$-th row with the $k_{1}$-th; since this F also has two identical rows, $\det(F)=0$. Then, $$\det(EA)=\det(A)+\alpha\det(F)=\det(A)$$ Therefore, $$\det(EA)=\det(A)=\det(E)\det(A)$$
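As with the previous two cases, the result is easy to verify numerically. A sketch using NumPy follows; the dimension, the multiplier, the rows involved, and the random matrix are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
K, alpha = 4, 3.0
A = rng.standard_normal((K, K))   # an arbitrary K x K matrix

# Elementary matrix: add alpha times row k1 of the identity to row k2.
k1, k2 = 0, 2
E = np.eye(K)
E[k2, :] += alpha * E[k1, :]

# det(E) = 1 and det(EA) = det(A).
assert np.isclose(np.linalg.det(E), 1.0)
assert np.isclose(np.linalg.det(E @ A), np.linalg.det(A))
```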

Determinant of product equals product of determinants

We have proved above that all three kinds of elementary matrices E satisfy the property $$\det(EA)=\det(E)\det(A)$$ In other words, the determinant of a product involving an elementary matrix equals the product of the determinants. We will prove in subsequent lectures that this is a more general property that holds for any two square matrices.
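The property for all three kinds of elementary matrices, and its repeated application to a product of several elementary matrices, can be checked numerically. This is a sketch using NumPy; the matrices below are arbitrary examples, one of each kind.

```python
import numpy as np

rng = np.random.default_rng(3)
K = 4
A = rng.standard_normal((K, K))   # an arbitrary K x K matrix

# One elementary matrix of each kind.
E1 = np.eye(K); E1[1, :] *= 2.0                 # row multiplication
E2 = np.eye(K); E2[[0, 3], :] = E2[[3, 0], :]   # row interchange
E3 = np.eye(K); E3[2, :] += 0.5 * E3[0, :]      # row addition

det = np.linalg.det
for E in (E1, E2, E3):
    assert np.isclose(det(E @ A), det(E) * det(A))

# Applying the property repeatedly extends it to products
# of several elementary matrices.
assert np.isclose(det(E1 @ E2 @ E3 @ A),
                  det(E1) * det(E2) * det(E3) * det(A))
```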

Elementary column operations

All the propositions above concern elementary matrices used to perform row operations. The same results apply to column operations, and their proofs are almost identical. This is a consequence of the fact that transposition does not change the determinant of a matrix (a fact that will be proved later on) and column operations on a matrix A can be seen as row operations performed on its transpose $A^{\top}$.
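Both facts used in this argument can be illustrated numerically. A sketch using NumPy, with an arbitrary matrix and an arbitrary column operation:

```python
import numpy as np

rng = np.random.default_rng(4)
K = 4
A = rng.standard_normal((K, K))   # an arbitrary K x K matrix

# A column operation on A (add 2 times column 0 to column 2) ...
Acol = A.copy()
Acol[:, 2] += 2.0 * Acol[:, 0]

# ... is the corresponding row operation performed on the transpose of A.
Arow = A.T.copy()
Arow[2, :] += 2.0 * Arow[0, :]
assert np.allclose(Acol, Arow.T)

# And transposition does not change the determinant.
assert np.isclose(np.linalg.det(A), np.linalg.det(A.T))
```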
