The Laplace expansion is a formula that allows us to express the determinant of a matrix as a linear combination of determinants of smaller matrices, called minors.
The Laplace expansion also allows us to write the inverse of a matrix in terms of its signed minors, called cofactors. The latter are usually collected in a matrix called the adjoint matrix.
Let us start by defining minors.
Definition Let $A$ be a $K\times K$ matrix (with $K\geq 2$). Denote by $A_{ij}$ the entry of $A$ at the intersection of the $i$-th row and $j$-th column. The minor $M_{ij}$ of $A_{ij}$ is the determinant of the $(K-1)\times(K-1)$ sub-matrix obtained from $A$ by deleting its $i$-th row and its $j$-th column.
We now illustrate the definition with an example.
Example Define the $3\times 3$ matrix with generic entries
$$A=\begin{bmatrix}A_{11}&A_{12}&A_{13}\\A_{21}&A_{22}&A_{23}\\A_{31}&A_{32}&A_{33}\end{bmatrix}.$$
Take the entry $A_{11}$. The sub-matrix obtained by deleting the first row and the first column is
$$\begin{bmatrix}A_{22}&A_{23}\\A_{32}&A_{33}\end{bmatrix}.$$
Thus, the minor $M_{11}$ of $A_{11}$ is
$$M_{11}=A_{22}A_{33}-A_{23}A_{32}.$$
The minor $M_{12}$ of $A_{12}$ is
$$M_{12}=\det\begin{bmatrix}A_{21}&A_{23}\\A_{31}&A_{33}\end{bmatrix}=A_{21}A_{33}-A_{23}A_{31}.$$
A cofactor is a minor whose sign may be flipped, depending on the position of the corresponding matrix entry.
Definition Let $A$ be a $K\times K$ matrix (with $K\geq 2$). Denote by $M_{ij}$ the minor of an entry $A_{ij}$. The cofactor $C_{ij}$ of $A_{ij}$ is
$$C_{ij}=(-1)^{i+j}M_{ij}.$$
As an example, the pattern of sign changes $(-1)^{i+j}$ of a $3\times 3$ matrix is
$$\begin{bmatrix}+&-&+\\-&+&-\\+&-&+\end{bmatrix}.$$
Example Consider again the $3\times 3$ matrix $A$ defined in the previous example. Take the entry $A_{12}$. The minor of $A_{12}$ is
$$M_{12}=A_{21}A_{33}-A_{23}A_{31}$$
and its cofactor is
$$C_{12}=(-1)^{1+2}M_{12}=-\left(A_{21}A_{33}-A_{23}A_{31}\right).$$
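To make the two definitions concrete, here is a minimal NumPy sketch (the helper names `minor` and `cofactor` are my own, not library functions; indices are 0-based as usual in Python, so the entry $A_{11}$ of the text corresponds to `A[0, 0]` in the code):

```python
import numpy as np

def minor(A, i, j):
    """Determinant of the sub-matrix obtained by deleting row i and column j."""
    sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return np.linalg.det(sub)

def cofactor(A, i, j):
    """Signed minor: (-1)**(i+j) times the minor of the (i, j) entry."""
    return (-1) ** (i + j) * minor(A, i, j)

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])
print(minor(A, 0, 0))     # det([[5, 6], [8, 10]]) = 2
print(cofactor(A, 0, 1))  # -det([[4, 6], [7, 10]]) = 2
```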
We are now ready to present the Laplace expansion.
Proposition Let $A$ be a $K\times K$ matrix (with $K\geq 2$). Denote by $C_{ij}$ the cofactor of an entry $A_{ij}$. Then, for any row $i$, the following row expansion holds:
$$\det(A)=\sum_{j=1}^{K}A_{ij}C_{ij}.$$
Similarly, for any column $j$, the following column expansion holds:
$$\det(A)=\sum_{i=1}^{K}A_{ij}C_{ij}.$$
Proof Let us start by proving the row expansion
$$\det(A)=\sum_{j=1}^{K}A_{ij}C_{ij}.$$
Denote by $A_{i\cdot}$ the $i$-th row of $A$ and by $e_{j}$ the $j$-th vector of the standard basis of $\mathbb{R}^{K}$, that is, a vector such that its $j$-th entry is equal to 1 and all the other entries are equal to 0. Now, denote by $A^{(j)}$ the matrix obtained from $A$ by substituting its $i$-th row with $e_{j}^{\top}$. We can write the $i$-th row of $A$ as a linear combination as follows:
$$A_{i\cdot}=\sum_{j=1}^{K}A_{ij}e_{j}^{\top}.$$
Since the determinant is linear in each row, we have that
$$\det(A)=\sum_{j=1}^{K}A_{ij}\det\left(A^{(j)}\right).$$
Now, the matrix $A^{(j)}$ can be transformed into the matrix
$$B^{(j)}=\begin{bmatrix}1 & 0_{1\times(K-1)}\\ b & A^{(-i,-j)}\end{bmatrix}$$
by performing $i-1$ row interchanges and $j-1$ column interchanges, where $A^{(-i,-j)}$ denotes the sub-matrix obtained from $A$ by deleting its $i$-th row and its $j$-th column, and $b$ collects the remaining entries of the $j$-th column of $A$. As a consequence, by the properties of the determinant of elementary matrices, we have that
$$\det\left(A^{(j)}\right)=(-1)^{i-1}(-1)^{j-1}\det\left(B^{(j)}\right)=(-1)^{i+j}\det\left(B^{(j)}\right).$$
By the definition of determinant, we have
$$\det\left(B^{(j)}\right)\overset{1}{=}\det\left(\left(B^{(j)}\right)^{\top}\right)\overset{2}{=}\det\left(\left(A^{(-i,-j)}\right)^{\top}\right)\overset{3}{=}\det\left(A^{(-i,-j)}\right)=M_{ij}$$
where: in step 1 we have used the fact that transposition does not change the determinant; in step 2 we have used the fact that the only non-zero entry of the first column of $\left(B^{(j)}\right)^{\top}$ is the first one (equal to 1), so that, in the sum over permutations that defines the determinant, only the terms that pick the first entry of the first column are non-zero, and what remains is the determinant of the lower-right block $\left(A^{(-i,-j)}\right)^{\top}$; in step 3, $M_{ij}$ is the minor of $A_{ij}$, and, by looking at the structure of $B^{(j)}$ above, it is clear that, after excluding the first row and the first column of $B^{(j)}$ from the computation of its determinant, we are computing the determinant of the matrix obtained from $A$ by deleting its $i$-th row and its $j$-th column. Thus,
$$\det(A)=\sum_{j=1}^{K}A_{ij}(-1)^{i+j}M_{ij}=\sum_{j=1}^{K}A_{ij}C_{ij}$$
where $C_{ij}=(-1)^{i+j}M_{ij}$ is the cofactor of $A_{ij}$. The proof for column expansions is analogous.
In other words, the determinant can be computed by summing all the entries of an arbitrarily chosen row (column) multiplied by their respective cofactors.
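As an illustration of how the expansion can be applied recursively, the following sketch (assuming NumPy; the function name `det_laplace` is my own) computes the determinant by repeatedly expanding along the first row and compares the result with `numpy.linalg.det`:

```python
import numpy as np

def det_laplace(A):
    """Determinant via Laplace expansion along the first row (recursive)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        if A[0, j] == 0.0:
            continue  # zero entries contribute nothing to the expansion
        sub = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += A[0, j] * (-1) ** j * det_laplace(sub)
    return total

A = np.array([[2.0, 0.0, 1.0],
              [3.0, 4.0, 5.0],
              [1.0, 2.0, 2.0]])
print(det_laplace(A), np.linalg.det(A))  # both are -2 (up to rounding)
```

Note that this recursion has a cost that grows factorially with the matrix size, so it is mainly useful for hand computations and small matrices; numerical libraries compute determinants via factorizations instead.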
Example Consider the generic $3\times 3$ matrix
$$A=\begin{bmatrix}A_{11}&A_{12}&A_{13}\\A_{21}&A_{22}&A_{23}\\A_{31}&A_{32}&A_{33}\end{bmatrix}.$$
We can use the Laplace expansion along the first column to compute its determinant:
$$\det(A)=A_{11}C_{11}+A_{21}C_{21}+A_{31}C_{31}=A_{11}\left(A_{22}A_{33}-A_{23}A_{32}\right)-A_{21}\left(A_{12}A_{33}-A_{13}A_{32}\right)+A_{31}\left(A_{12}A_{23}-A_{13}A_{22}\right).$$
Example For the same matrix $A$, we can use the Laplace expansion along the third row to compute the determinant:
$$\det(A)=A_{31}C_{31}+A_{32}C_{32}+A_{33}C_{33}=A_{31}\left(A_{12}A_{23}-A_{13}A_{22}\right)-A_{32}\left(A_{11}A_{23}-A_{13}A_{21}\right)+A_{33}\left(A_{11}A_{22}-A_{12}A_{21}\right).$$
An interesting and useful fact is that, while the Laplace expansion gives
$$\sum_{j=1}^{K}A_{ij}C_{ij}=\det(A),$$
we have
$$\sum_{j=1}^{K}A_{ij}C_{kj}=0$$
when $k\neq i$. In other words, if we multiply the elements of row $i$ by the cofactors of a different row $k$ and we add them up, we get zero as a result.
Proof Define a matrix $\widetilde{A}$ whose rows are all equal to the corresponding rows of $A$, except for the $k$-th, which is equal to the $i$-th row of $A$. Thus, $\widetilde{A}$ has two identical rows (the $i$-th and the $k$-th) and, as a consequence, it is singular and it has zero determinant. Denote by $\widetilde{C}_{kj}$ the cofactor of $\widetilde{A}_{kj}$. Then,
$$0=\det\left(\widetilde{A}\right)=\sum_{j=1}^{K}\widetilde{A}_{kj}\widetilde{C}_{kj}\overset{1}{=}\sum_{j=1}^{K}A_{ij}\widetilde{C}_{kj}\overset{2}{=}\sum_{j=1}^{K}A_{ij}C_{kj}$$
where: in step 1 we have used the fact that the $k$-th row of $\widetilde{A}$ is equal to the $i$-th row of $A$; in step 2 we have used the fact that, although the $k$-th row of $\widetilde{A}$ is different from the $k$-th row of $A$, we have that $\widetilde{C}_{kj}=C_{kj}$ because row $k$ is deleted when forming the sub-matrices used to compute these cofactors.
The same result holds for columns:
$$\sum_{i=1}^{K}A_{ij}C_{ik}=0$$
when $k\neq j$. The proof is analogous to the previous one.
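These two facts are easy to check numerically. In the sketch below (NumPy again, with a randomly generated matrix of my own choosing), multiplying the entries of a row by its own cofactors reproduces the determinant, while using the cofactors of a different row gives zero up to rounding error:

```python
import numpy as np

def cofactor(A, i, j):
    """Signed minor of the (i, j) entry of A."""
    sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(sub)

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))

right_row = sum(A[0, j] * cofactor(A, 0, j) for j in range(4))  # expansion along row 0
wrong_row = sum(A[0, j] * cofactor(A, 2, j) for j in range(4))  # cofactors of row 2 instead

print(right_row, np.linalg.det(A))  # equal up to rounding
print(wrong_row)                    # approximately 0
```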
We now define the cofactor matrix (or matrix of cofactors).
Definition Let $A$ be a $K\times K$ matrix. Denote by $C_{ij}$ the cofactor of $A_{ij}$ (defined above). Then, the $K\times K$ matrix $C$ such that its $(i,j)$-th entry is equal to $C_{ij}$ for every $i$ and $j$ is called the cofactor matrix of $A$.
The adjoint matrix (or adjugate matrix) is the transpose of the matrix of cofactors.
Definition Let $A$ be a $K\times K$ matrix and $C$ its cofactor matrix. The adjoint matrix of $A$, denoted by $\operatorname{adj}(A)$, is
$$\operatorname{adj}(A)=C^{\top}.$$
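Putting the last two definitions together, a small NumPy sketch (the names `cofactor_matrix` and `adjoint` are mine, not library functions) builds the cofactor matrix entry by entry and transposes it to obtain the adjoint:

```python
import numpy as np

def cofactor_matrix(A):
    """K x K matrix whose (i, j) entry is the cofactor of A[i, j]."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(sub)
    return C

def adjoint(A):
    """Adjoint (adjugate) matrix: the transpose of the cofactor matrix."""
    return cofactor_matrix(A).T
```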
The following proposition is a direct consequence of the Laplace expansion.
Proposition Let $A$ be a $K\times K$ matrix and $\operatorname{adj}(A)$ its adjoint. Then,
$$A\operatorname{adj}(A)=\operatorname{adj}(A)A=\det(A)I$$
where $I$ is the $K\times K$ identity matrix.
Proof Define
$$P=A\operatorname{adj}(A).$$
By the definition of matrix multiplication, the $(i,k)$-th entry of $P$ is
$$P_{ik}=\sum_{j=1}^{K}A_{ij}\left[\operatorname{adj}(A)\right]_{jk}\overset{1}{=}\sum_{j=1}^{K}A_{ij}C_{kj}$$
where in step 1 we have used the fact that the adjoint is the transpose of the cofactor matrix. When $k=i$, the expression in step 1 is the Laplace expansion of $\det(A)$ and it is therefore equal to $\det(A)$. When $k\neq i$, it is an expansion along the wrong row, and it is therefore equal to $0$. Thus,
$$A\operatorname{adj}(A)=\det(A)I.$$
When $P=\operatorname{adj}(A)A$, we have
$$P_{ik}=\sum_{j=1}^{K}\left[\operatorname{adj}(A)\right]_{ij}A_{jk}=\sum_{j=1}^{K}A_{jk}C_{ji}$$
which is a column expansion (the entries of column $k$ multiplied by the cofactors of column $i$). Thus, by the same arguments used previously, we have that
$$\operatorname{adj}(A)A=\det(A)I.$$
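Running the following lines in the same session as the sketch above (they reuse the `adjoint` helper defined there; the matrix is my own choice), the identity $A\operatorname{adj}(A)=\operatorname{adj}(A)A=\det(A)I$ can be checked numerically:

```python
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [4.0, 0.0, 5.0]])

print(A @ adjoint(A))                # approximately det(A) * I
print(adjoint(A) @ A)                # the product in the other order gives the same matrix
print(np.linalg.det(A) * np.eye(3))  # det(A) * I, equal up to rounding
```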
A consequence of the previous proposition is the following.
Proposition Let $A$ be a $K\times K$ invertible matrix and $\operatorname{adj}(A)$ its adjoint. Then,
$$A^{-1}=\frac{1}{\det(A)}\operatorname{adj}(A).$$
Proof Since $A$ is invertible, $\det(A)\neq 0$. Then, we can rewrite the result
$$A\operatorname{adj}(A)=\operatorname{adj}(A)A=\det(A)I$$
as
$$A\left(\frac{1}{\det(A)}\operatorname{adj}(A)\right)=\left(\frac{1}{\det(A)}\operatorname{adj}(A)\right)A=I.$$
Thus, by the definition of inverse matrix, the matrix
$$\frac{1}{\det(A)}\operatorname{adj}(A)$$
is the inverse of $A$.
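A minimal, self-contained sketch of the resulting inversion formula, assuming NumPy (the function name `inverse_via_adjoint` is my own); this is useful mainly for small matrices and for theory, since `numpy.linalg.inv` relies on more efficient and numerically stabler factorizations:

```python
import numpy as np

def inverse_via_adjoint(A):
    """Compute the inverse as adj(A) / det(A); requires det(A) != 0."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    d = np.linalg.det(A)
    if np.isclose(d, 0.0):
        raise ValueError("matrix is (numerically) singular")
    adj = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
            adj[j, i] = (-1) ** (i + j) * np.linalg.det(sub)  # transposed cofactor matrix
    return adj / d

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
print(inverse_via_adjoint(A))  # [[ 3., -1.], [-5.,  2.]]
print(np.linalg.inv(A))        # same result
```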
Below you can find some exercises with explained solutions.
Exercise 1 Define the generic $3\times 3$ matrix
$$A=\begin{bmatrix}A_{11}&A_{12}&A_{13}\\A_{21}&A_{22}&A_{23}\\A_{31}&A_{32}&A_{33}\end{bmatrix}.$$
Compute the determinant of $A$ by using the Laplace expansion along its third column.
Solution The expansion is
$$\det(A)=A_{13}C_{13}+A_{23}C_{23}+A_{33}C_{33}=A_{13}\left(A_{21}A_{32}-A_{22}A_{31}\right)-A_{23}\left(A_{11}A_{32}-A_{12}A_{31}\right)+A_{33}\left(A_{11}A_{22}-A_{12}A_{21}\right).$$
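As a numeric sanity check of the column expansion (the matrix below is my own choice, not part of the original exercise), the same formula can be evaluated with NumPy and compared with `numpy.linalg.det`:

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [3.0, 1.0, 4.0],
              [2.0, 5.0, 1.0]])

# Expansion along the third column (index 2): sum of A[i, 2] times its cofactor.
expansion = sum(
    A[i, 2] * (-1) ** (i + 2)
    * np.linalg.det(np.delete(np.delete(A, i, axis=0), 2, axis=1))
    for i in range(3)
)
print(expansion, np.linalg.det(A))  # both equal -9 (up to rounding)
```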
Exercise 2 Define the generic $2\times 2$ matrix
$$B=\begin{bmatrix}B_{11}&B_{12}\\B_{21}&B_{22}\end{bmatrix}$$
and assume that $\det(B)\neq 0$. Compute the adjoint of $B$, use it to derive the inverse of $B$, and verify that the matrix thus obtained is indeed the inverse of $B$.
Solution The determinant of $B$ is
$$\det(B)=B_{11}B_{22}-B_{12}B_{21}$$
which is non-zero by assumption. Thus, $B$ is invertible. Note that the sub-matrices obtained by deleting one row and one column of $B$ are $1\times 1$. Therefore, the matrix of minors of $B$ is
$$\begin{bmatrix}B_{22}&B_{21}\\B_{12}&B_{11}\end{bmatrix}$$
and the matrix of cofactors is
$$C=\begin{bmatrix}B_{22}&-B_{21}\\-B_{12}&B_{11}\end{bmatrix}.$$
The adjoint is obtained by transposing the matrix of cofactors:
$$\operatorname{adj}(B)=C^{\top}=\begin{bmatrix}B_{22}&-B_{12}\\-B_{21}&B_{11}\end{bmatrix}.$$
The inverse can be computed as
$$B^{-1}=\frac{1}{\det(B)}\operatorname{adj}(B)=\frac{1}{B_{11}B_{22}-B_{12}B_{21}}\begin{bmatrix}B_{22}&-B_{12}\\-B_{21}&B_{11}\end{bmatrix}.$$
Let us multiply it by $B$ in order to check that it is indeed its inverse:
$$B^{-1}B=\frac{1}{B_{11}B_{22}-B_{12}B_{21}}\begin{bmatrix}B_{22}&-B_{12}\\-B_{21}&B_{11}\end{bmatrix}\begin{bmatrix}B_{11}&B_{12}\\B_{21}&B_{22}\end{bmatrix}=\frac{1}{B_{11}B_{22}-B_{12}B_{21}}\begin{bmatrix}B_{11}B_{22}-B_{12}B_{21}&0\\0&B_{11}B_{22}-B_{12}B_{21}\end{bmatrix}=I.$$
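For a concrete check of the $2\times 2$ formula (again with numbers of my own choosing), the adjoint-based inverse can be compared against `numpy.linalg.inv`:

```python
import numpy as np

B = np.array([[3.0, 1.0],
              [2.0, 4.0]])

adj_B = np.array([[ B[1, 1], -B[0, 1]],
                  [-B[1, 0],  B[0, 0]]])  # adjoint of a 2 x 2 matrix
B_inv = adj_B / np.linalg.det(B)

print(B_inv @ B)         # approximately the identity matrix
print(np.linalg.inv(B))  # agrees with B_inv
```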
Please cite as:
Taboga, Marco (2021). "The Laplace expansion, minors, cofactors and adjoints", Lectures on matrix algebra. https://www.statlect.com/matrix-algebra/Laplace-expansion-minors-cofactors-adjoints.