
Section 4.4 Determinant Properties

We present a number of useful properties of determinants as well as determinants of particular types of matrices including diagonal, permutation, and triangular matrices.

Proof.

The proof is an exercise.

Proof.

The proof is an exercise.
A natural question to ask is: what effect do row operations have on the determinant? The next theorem tells us how the determinant changes when we perform the scalar multiple with addition row operation.

Proof.

By Definition 4.1.1 we have that
\begin{align*} \amp \det(\B)\amp \!\!\!\!=\amp \left|\begin{array}{cccc}A_{11}\amp A_{12}\amp \cdots\amp A_{1n}\\ \vdots \amp \vdots \amp \vdots \amp \vdots \\ A_{j1}+\l A_{i1}\amp A_{j2}+\l A_{i2}\amp \cdots\amp A_{jn}+\l A_{in}\\ \vdots\amp \amp \ddots\amp \vdots\\ A_{n1}\amp \cdots\amp \cdots\amp A_{nn}\end{array}\right|\amp \amp \\ \\ \amp \amp =\amp \left|\begin{array}{cccc}A_{11}\amp A_{12}\amp \cdots\amp A_{1n}\\ \vdots \amp \vdots \amp \vdots \amp \vdots \\ A_{j1}\amp A_{j2}\amp \cdots\amp A_{jn}\\\vdots\amp \amp \ddots\amp \vdots\\ A_{n1}\amp \cdots\amp \cdots\amp A_{nn}\end{array}\right|\,+\,\left|\begin{array}{cccc}A_{11}\amp A_{12}\amp \cdots\amp A_{1n}\\ \vdots \amp \vdots \amp \vdots \amp \vdots \\ \l A_{i1}\amp \l A_{i2}\amp \cdots\amp \l A_{in}\\\vdots\amp \amp \ddots\amp \vdots\\ A_{n1}\amp \cdots\amp \cdots\amp A_{nn}\end{array}\right|\amp \amp \\ \\ \amp \amp \qquad\amp\qquad\qquad(\text{by }\knowl{./knowl/xref/detdef.html}{\text{Definition 4.1.1}} \knowl{./knowl/xref/detp3.html}{\text{Property 3}})\amp\amp\\ \\ \amp \amp =\amp \left|\begin{array}{cccc}A_{11}\amp A_{12}\amp \cdots\amp A_{1n}\\ \vdots \amp \vdots \amp \vdots \amp \vdots \\ A_{j1}\amp A_{j2}\amp \cdots\amp A_{jn}\\\vdots\amp \amp \ddots\amp \vdots\\ A_{n1}\amp \cdots\amp \cdots\amp A_{nn}\end{array}\right|\,+\, \l\left|\begin{array}{cccc}A_{11}\amp A_{12}\amp \cdots\amp A_{1n}\\ \vdots \amp \vdots \amp \vdots \amp \vdots \\ A_{i1}\amp A_{i2}\amp \cdots\amp A_{in}\\\vdots\amp \amp \ddots\amp \vdots\\ A_{n1}\amp \cdots\amp \cdots\amp A_{nn}\end{array}\right|\amp \amp \\ \\ \amp \amp \qquad\amp\qquad\qquad(\text{by }\knowl{./knowl/xref/detdef.html}{\text{Definition 4.1.1}} \knowl{./knowl/xref/detp3.html}{\text{Property 3}})\amp\amp\\ \\ \amp \amp =\amp \left|\begin{array}{cccc}A_{11}\amp A_{12}\amp \cdots\amp A_{1n}\\ \vdots \amp \vdots \amp \vdots \amp \vdots \\ A_{j1}\amp A_{j2}\amp \cdots\amp A_{jn}\\\vdots\amp \amp \ddots\amp \vdots\\ A_{n1}\amp \cdots\amp \cdots\amp 
A_{nn}\end{array}\right|\,+\,\l\cdot0\amp \amp \\ \\ \amp \amp \qquad\amp\qquad\qquad(\text{by }\knowl{./knowl/xref/detthmeqrows0.html}{\text{Theorem 4.4.2}}\text{ since rows i and j are equal})\amp\amp\\ \\ \amp \amp =\amp \det(\A).\amp \amp \end{align*}
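This invariance is easy to check numerically. The sketch below is an illustration of ours (not a routine from the text): a small cofactor-expansion helper verifies that adding \(5R_1\) to \(R_3\) leaves a sample determinant unchanged.

```python
def det(M):
    # Determinant by cofactor (Laplace) expansion along the first row;
    # an illustrative helper, fine for small matrices.
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
# B is A after the row operation 5*R1 + R3 -> R3
B = [[1, 2, 3], [4, 5, 6], [7 + 5 * 1, 8 + 5 * 2, 10 + 5 * 3]]
print(det(A), det(B))  # both -3: the operation leaves the determinant unchanged
```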

Proof.

The proof is an exercise.

Proof.

First let \(\U\in\mathcal{U}^{n \times n}\) with
\begin{equation*} \U=\left(\begin{array}{ccccc}U_{11}\amp\times\amp\times\amp\cdots\amp\times\\0\amp U_{22}\amp\times\amp\cdots\amp\vdots\\ \vdots\amp0\amp\ddots\amp\ddots\amp\vdots\\ \vdots\amp\vdots\amp \ddots\amp\ddots\amp\times\\0\amp0\amp\cdots\amp0\amp U_{nn}\end{array}\right) \end{equation*}
We proceed by induction on the size of the matrix \(n.\)
For the base case we use \(n=2\) and observe that, by the formula in Theorem 4.1.2, the determinant of \(\left(\begin{array}{cc} U_{11} \amp \times \\ 0 \amp U_{22} \end{array}\right)\) is \(\det(\U)=U_{11}\cdot U_{22},\) as desired.
The induction hypothesis is that for every upper-triangular matrix \(\U\in\R^{k\times k}\) we have \(\displaystyle{\det(\U)=\prod_{i=1}^{k}U_{ii}}.\)
Now in the induction step we assume the induction hypothesis, fix such a \(k\) and let \(\U'\in\R^{(k+1)\times(k+1)}.\) If \(U'_{11}=0\) then the first column of \(\U'\) is zero, so expanding along column one by Theorem 4.1.10 gives \(\det(\U')=0=\prod_{i=1}^{k+1}U'_{ii}.\) Otherwise assume that \(U'_{11}\ne0\) and expand the determinant along column one by Theorem 4.1.10. Since \(\U'_{\ast 1}\) contains only \(0\)’s below the first row, we see that \(\det(\U')=U'_{11}\cdot|\boldsymbol{M_{11}}|.\) But \(\boldsymbol{M_{11}}\) is a \(k\times k\) minor which is evidently upper-triangular, so by the induction hypothesis we have \(\displaystyle{|\boldsymbol{M_{11}}|=\prod_{i=2}^{k+1}U'_{ii}}.\) Thus \(\displaystyle{\det(\U')=U'_{11}\cdot\prod_{i=2}^{k+1}U'_{ii}=\prod_{i=1}^{k+1}U'_{ii}}.\) That is, if we can compute the determinant of every upper-triangular matrix in \(\R^{k\times k}\) by taking the product of the diagonal entries, we can do the same for every upper-triangular matrix in \(\R^{(k+1)\times(k+1)}.\)
We conclude by mathematical induction that for all \(\displaystyle{\U\in\mathcal{U}^{n\times n},\,\det(\U)=\prod_{i=1}^nU_{ii}}.\)
The proof for \(n\times n\) lower-triangular matrices proceeds almost identically.
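Both the upper- and lower-triangular cases can be spot-checked numerically. The sketch below is our own illustration (not from the text), comparing a cofactor-expansion determinant against the product of the diagonal entries.

```python
from math import prod

def det(M):
    # Determinant by cofactor expansion along the first row (illustrative helper).
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

U = [[2, 7, 1], [0, 3, 4], [0, 0, 5]]   # upper-triangular
L = [[2, 0, 0], [7, 3, 0], [1, 4, 5]]   # lower-triangular
diag = prod(U[i][i] for i in range(3))  # product of diagonal entries
print(det(U), det(L), diag)             # all 30
```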

Proof.

The result follows immediately from Theorem 4.4.5, since every diagonal matrix is in particular upper-triangular. (Note that the \(\prod\) notation mimics the \(\sum\) notation for a sum, but for a product instead.) Alternatively, \(n\) applications of the special case of Definition 4.1.1 Property 3 (multiplying a row by a scalar) yield that
\begin{align*} \det(\D)\,\,\amp =\quad\left|\begin{array}{cccc}D_{11}\amp 0\amp \cdots\amp 0\\0\amp D_{22}\amp \cdots\amp 0\\\vdots\amp \amp \ddots\amp \vdots\\ 0\amp \cdots\amp \cdots\amp D_{nn}\end{array}\right|\,\,=\,\,D_{11}\left|\begin{array}{cccc}1\amp 0\amp \cdots\amp 0\\0\amp D_{22}\amp \cdots\amp 0\\\vdots\amp \amp \ddots\amp \vdots\\ 0\amp \cdots\amp \cdots\amp D_{nn}\end{array}\right|\\ \amp =\,D_{11}D_{22}\left|\begin{array}{cccc}1\amp 0\amp \cdots\amp 0\\0\amp 1\amp \cdots\amp 0\\\vdots\amp \amp \ddots\amp \vdots\\ 0\amp \cdots\amp \cdots\amp D_{nn}\end{array}\right|\\ \amp \qquad\vdots\\ \amp =\left(\prod_{i=1}^nD_{ii}\right)\,\cdot\,\det(I)\\ \amp =\left(\prod_{i=1}^nD_{ii}\right)\qquad\qquad\qquad\qquad\text{by }\knowl{./knowl/xref/detdef.html}{\text{Definition 4.1.1}} \knowl{./knowl/xref/detp1.html}{\text{Property 1}} \end{align*}

Proof.

The proof is a worksheet exercise.
A very important property of the determinant is the fact that it is multiplicative.

Proof.

For a general proof, we encourage the reader to consult [1]; we show the \(2\times2\) case here.
Let \(\A,\B \in \R^{2\times2}\) with \(\A=\left(\begin{array}{cc}a\amp b\\ c\amp d \end{array}\right)\) and \(\B=\left(\begin{array}{cc}a' \amp b'\\ c' \amp d' \end{array}\right).\) By Theorem 4.1.2 we have
\begin{equation*} \det(\A)=ad-bc\quad\,\text{ and }\quad\det(\B)=a'd'-b'c'. \end{equation*}
The matrix product is \(\A\B=\left(\begin{array}{cc}aa'+bc' \amp ab'+bd'\\ ca'+dc' \amp cb'+dd' \end{array}\right).\)
Computing the determinant of the product, we have
\begin{align*} \det(\A\B)\amp =(aa'+bc')(cb'+dd')-(ab'+bd')(ca'+dc')\\ \amp =aca'b'+ada'd'+bcb'c'+bdc'd'\\ \amp\qquad\qquad -aca'b'-adb'c'-bca'd'-bdc'd'\\ \amp =ada'd'+bcb'c'-adb'c'-bca'd'\\ \amp =(ad-bc)(a'd'-b'c')\\ \amp =\det(\A)\det(\B). \end{align*}
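The \(2\times2\) computation above can be mirrored in code; this is a sketch of ours (helper names `det2` and `matmul2` are our own), not part of the text.

```python
def det2(M):
    # 2x2 determinant: ad - bc (the formula of Theorem 4.1.2).
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matmul2(A, B):
    # Entrywise 2x2 matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(det2(matmul2(A, B)), det2(A) * det2(B))  # both 4: det(AB) = det(A)det(B)
```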

Proof.

The proof is an exercise.

Proof.

The proof is a homework exercise.

Proof.

We prove that the determinant is invariant under transposition by using the LU-factorization. By Theorem 3.13.10 fix permutation matrices \(\P_1,\P_2\in\R^{n\times n}\) and \(\L\in\mathcal{L}^{n\times n},\,\U\in\mathcal{U}^{n\times n}\) for which \(\P_1\A\P_2=\L\U\) and hence \(\A=\P^{-1}_1\L\U\P^{-1}_2.\) By Theorem 2.10.33 and Theorem 3.10.22 we have
\begin{align*} \A^T=\amp\left(\P^{-1}_2\right)^T\U^T\L^T\left(\P^{-1}_1\right)^T\\ =\amp\P_2\U^T\L^T\P_1. \end{align*}
Now by Theorem 4.4.8, Corollary 4.4.7 and Theorem 4.4.10 we have
\begin{align*} \det\left(\A^T\right)=\amp\det\left(\P_2\U^T\L^T\P_1\right)\\ =\amp\det\left(\P_2\right)\det\left(\U^T\right)\det(\L^T)\det\left(\P_1\right)\\ =\amp\det\left(\P_2\right)\det\left(\U\right)\det(\L)\det\left(\P_1\right)\\ =\amp\det\left(\P_1\right)\det\left(\L\right)\det(\U)\det\left(\P_2\right)\\ =\amp\det\left(\P_1^T\right)\det\left(\L\right)\det(\U)\det\left(\P_2^T\right)\\ =\amp\det\left(\P^{-1}_1\right)\det\left(\L\right)\det(\U)\det\left(\P^{-1}_2\right)\\ =\amp\det\left(\P^{-1}_1\L\U\P^{-1}_2\right)\\ =\amp\det(\A). \end{align*}
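Transpose invariance is easy to confirm on the matrix from Example 4.1.6. The check below is our own illustration, reusing a small cofactor-expansion helper.

```python
def det(M):
    # Determinant by cofactor expansion along the first row (illustrative helper).
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

A = [[1, -1, 1], [-2, 0, 1], [0, 1, 4]]
At = [list(col) for col in zip(*A)]  # the transpose of A
print(det(A), det(At))               # both -11
```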

Proof.

The proof is a homework exercise.

Proof.

The proof is a worksheet exercise.
We now have enough details to determine how performing elimination on a matrix affects the determinant.

Proof.

Elimination on \(\A\) can be accomplished by performing elementary row operations, so it is sufficient to determine the effect of each type. By Definition 4.1.1 Property 2 a row swap (\(R_i \leftrightarrow R_j\)) changes the determinant by a sign, and by Property 3 a scalar multiple (\(aR_i \rightarrow R_i\) with \(a \neq 0\)) changes the determinant by a factor of \(a.\) Theorem 4.4.3 shows that the scalar multiple with addition row operation (\(\ell R_i+R_j \rightarrow R_j\)) does not change the determinant at all. Thus if \(\U\) is a row echelon form of a matrix \(\A,\) then the determinant of \(\A\) is a (nonzero) scalar multiple of the determinant of \(\U.\) Specifically \(\det(\U)=(-1)^k L \det(\A),\) so that \(\det(\A)=(-1)^k L^{-1} \det(\U),\) where \(k\) is the number of row swaps and \(L \neq 0\) is the product of all the scalars used in the scalar multiple row operations.
We can use these theorems to show that the value of the determinant tells us whether the matrix is invertible (nonsingular).

Proof.

Suppose \(\A \in \R^{n \times n}\) is nonsingular. Then elimination on \(\A\) produces an upper-triangular \(\U\) which has a full set of pivots by Theorem 3.10.19. By Theorem 4.4.14 we know that \(\det(\A)=s\cdot \det(\U)\) where \(s \neq 0.\) Since for a square matrix, all pivots in a full set must be on the diagonal, by Theorem 4.4.5 we must have \(\det(\U)\) equal to the product of the pivots, each of which is nonzero by Definition 3.1.8. Thus \(\det(\U)\ne 0.\) It follows that \(\det(\A)=s\cdot\det(\U) \neq 0.\)
Now suppose that \(\A\in \R^{n \times n}\) is singular. Then the reduced echelon form of \(\A,\) call it \(\U,\) will have a zero row, so by Theorem 4.4.4 \(\det(\U)=0.\) By Theorem 4.4.14 we know \(\det(\A)=s\det(\U)\) for some \(s \in\R,\) thus \(\det(\A)=s\cdot 0 = 0.\)
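The invertibility criterion can be probed numerically; below is a sketch of ours (not from the text). The matrix `S` is singular because its third row is \(2R_2-R_1,\) while `N` has independent rows.

```python
def det(M):
    # Determinant by cofactor expansion along the first row (illustrative helper).
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

S = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]  # row 3 = 2*row 2 - row 1, so S is singular
N = [[2, 0, 1], [1, 3, 0], [0, 1, 4]]  # rows are independent, so N is nonsingular
print(det(S), det(N))                  # 0 and a nonzero value
```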

Remark 4.4.16. The determinant by elimination.

The theorems in this section suggest another way to find the determinant of a matrix. If we put \(\A\) in row echelon form, \(\U,\) then we know the determinant of \(\U\) is just the product of the elements on the diagonal, and the determinant of \(\A\) is just a scalar multiple of the determinant of \(\U.\)
Furthermore, if we restrict ourselves to elementary row operations in the form of row swaps or scalar multiplication with addition, then \(\,\det(\A)=(-1)^k \det(\U)\) where \(k\) is the number of row swaps we performed.
To illustrate suppose that \(\A=\left(\begin{array}{cc} a\amp b \\ c \amp d \end{array}\right)\) is a general \(2 \times 2\) matrix. If \(a \neq 0\) then \(\A=\left(\begin{array}{cc} a\amp b \\ c \amp d \end{array}\right) \xrightarrow{-\frac{c}{a}R_1 +R_2 \rightarrow R_2} \left(\begin{array}{cc} a\amp b \\ 0 \amp -\frac{c}{a}b+d \end{array}\right)=\U.\) By Theorem 4.4.5 we know the determinant of \(\U\) is the product of the diagonal entries, \(\det(\U)=a\cdot \left( -\frac{c}{a}b+d \right)=ad-bc,\) and by Corollary 4.4.13 we have \(\det(\A)=\det(\U).\) This gives the same formula as in Theorem 4.1.2, \(\det(\A)=ad-bc.\)
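The procedure of this remark translates directly into an algorithm. The sketch below is our own (the name `det_by_elimination` is hypothetical): it reduces the matrix to row echelon form using only row swaps and scalar-multiple-with-addition operations, tracks the sign, and returns the signed product of the diagonal. Exact `Fraction` arithmetic avoids floating-point error.

```python
from fractions import Fraction

def det_by_elimination(A):
    # Reduce A to row echelon form with row swaps (each flips the sign)
    # and scalar-multiple-with-addition operations (which change nothing),
    # then multiply the diagonal entries.
    M = [[Fraction(x) for x in row] for row in A]
    n, sign = len(M), 1
    for k in range(n):
        # Find a row at or below row k with a nonzero entry in column k.
        p = next((r for r in range(k, n) if M[r][k] != 0), None)
        if p is None:
            return Fraction(0)          # a zero column: the matrix is singular
        if p != k:
            M[k], M[p] = M[p], M[k]     # row swap toggles the sign
            sign = -sign
        for r in range(k + 1, n):       # eliminate entries below the pivot
            l = M[r][k] / M[k][k]
            M[r] = [M[r][j] - l * M[k][j] for j in range(n)]
    result = Fraction(sign)
    for i in range(n):
        result *= M[i][i]               # det = sign * product of diagonal entries
    return result
```

Run on the matrices of the two examples that follow, this reproduces \(\det(\A)=-11\) and \(\det(\A)=-40\) respectively.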

Example 4.4.17. \(\boldsymbol{3\times3}\) determinant by elimination.

We find the determinant of \(\A=\lmatrix{rrr} 1 \amp -1 \amp 1 \\ -2 \amp 0 \amp 1\\ 0 \amp 1 \amp 4\rmatrix\) from Example 4.1.6 using this method. We have
\begin{equation*} \begin{array}{lll}\A=\lmatrix{rrr} 1 \amp -1 \amp 1 \\ -2 \amp 0 \amp 1\\ 0 \amp 1 \amp 4\rmatrix \amp\underset{sign=+1}{\xrightarrow{2R_1+R_2 \rightarrow R_2}}\amp\lmatrix{rrr} 1 \amp -1 \amp 1 \\ 0 \amp -2 \amp 3 \\ 0 \amp 1 \amp 4\rmatrix\\ \amp \underset{sign=+1}{\xrightarrow{\frac{1}{2}R_2 +R_3\rightarrow R_3}}\amp\lmatrix{rrr} 1 \amp -1 \amp 1 \\ 0 \amp -2 \amp 3 \\ 0 \amp 0 \amp \frac{11}{2}\rmatrix=\U.\end{array} \end{equation*}
Note that at each stage we keep track of the sign of the determinant, which toggles between \(+1\) and \(-1\) with each row swap. Since we did not do any row swaps we have sign\(=+1\) so by Theorem 4.4.5 and Remark 4.4.16,
\begin{equation*} \det(\A)=(\text{sign})\det(\U)=(1)(-2)\left(\frac{11}{2}\right)=-11. \end{equation*}

Example 4.4.18. \(\boldsymbol{4\times4}\) determinant by elimination.

Find the determinant of \(\A=\lmatrix{rrrr}1 \amp -1 \amp 4 \amp 0 \\ 0 \amp 0 \amp 3 \amp 1 \\ 2 \amp 2 \amp -1 \amp 1\\3 \amp -1 \amp 2 \amp 2 \rmatrix.\)
We put \(\A\) in row echelon form:
\begin{equation*} \begin{array}{lll}\A=\lmatrix{rrrr}1 \amp -1 \amp 4 \amp 0 \\ 0 \amp 0 \amp 3 \amp 1 \\ 2 \amp 2 \amp -1 \amp 1\\3 \amp -1 \amp 2 \amp 2 \rmatrix \amp \underset{sign=+1}{\xrightarrow{-2R_1+R_3 \rightarrow R_3}}\amp\lmatrix{rrrr}1 \amp -1 \amp 4 \amp 0 \\ 0 \amp 0 \amp 3 \amp 1 \\ 0 \amp 4 \amp -9 \amp 1\\3 \amp -1 \amp 2 \amp 2 \rmatrix \\ \amp\underset{sign=+1}{\xrightarrow{-3R_1+R_4 \rightarrow R_4 }}\amp\lmatrix{rrrr}1 \amp -1 \amp 4 \amp 0 \\ 0 \amp 0 \amp 3 \amp 1 \\ 0 \amp 4 \amp -9 \amp 1\\ 0\amp 2\amp -10\amp 2 \rmatrix\\ \amp\underset{sign=-1}{\xrightarrow{R_2\leftrightarrow R_3}}\amp\lmatrix{rrrr}1 \amp -1 \amp 4 \amp 0 \\ 0 \amp 4 \amp -9 \amp 1\\ 0 \amp 0 \amp 3 \amp 1 \\0\amp 2\amp -10\amp 2 \rmatrix\\ \amp \underset{sign=-1}{\xrightarrow{-\frac{1}{2}R_2+R_4\rightarrow R_4}}\amp\lmatrix{rrrr}1 \amp -1 \amp 4 \amp 0 \\ 0 \amp 4 \amp -9 \amp 1\\ 0 \amp 0 \amp 3 \amp 1 \\0\amp 0\amp -\frac{11}{2} \amp \frac{3}{2} \rmatrix\\ \amp\underset{sign=-1}{\xrightarrow{\frac{11}{6}R_3+R_4\rightarrow R_4}}\amp\lmatrix{rrrr}1 \amp -1 \amp 4 \amp 0 \\ 0 \amp 4 \amp -9 \amp 1\\ 0 \amp 0 \amp 3 \amp 1 \\0\amp 0\amp 0 \amp \frac{10}{3} \rmatrix=\U.\end{array} \end{equation*}
Now by Theorem 4.4.5 and Remark 4.4.16 we obtain
\begin{equation*} \det(\A)=(\text{sign})\det(\U)=(-1)(1)(4)(3)\left(\frac{10}{3}\right)=-40. \end{equation*}