Section 4.4 Determinant Properties
Theorem 4.4.2. Determinants of matrices with two equal rows.
When two rows of a matrix \(\A\in\R^{n\times n}\) are equal, \(\det(\A)=0.\)
Proof.
Suppose rows \(i\) and \(j\) of \(\A\) are equal, with \(i\ne j,\) and let \(\B\) be the result of the row swap \(R_i\leftrightarrow R_j.\) Since the two rows are identical, \(\B=\A.\) On the other hand, by Definition 4.1.1 Property 2, \(\det(\B)=-\det(\A).\) Thus \(\det(\A)=-\det(\A),\) which forces \(\det(\A)=0.\)
A natural question to ask is: what effect do row operations have on the determinant? The next theorem tells us what happens when we perform a scalar multiple with addition.
Theorem 4.4.3. Scalar multiplication with addition leaves the determinant unchanged.
Let \(\A \in \R^{n \times n}\) and let \(\B\) be formed from \(\A\) by performing the elementary row operation of a scalar multiplication with addition: \(\l R_i+R_j\longrightarrow R_j\) for some \(\l\in\R\) and some \(1\le i,j\le n\) (\(i\ne j\)). Then \(\det(\B)=\det(\A).\)
Proof.
By Definition 4.1.1 we have that
\begin{align*}
\amp \det(\B)\amp \!\!\!\!=\amp \left|\begin{array}{cccc}A_{11}\amp A_{12}\amp \cdots\amp A_{1n}\\
\vdots \amp \vdots \amp \vdots \amp \vdots \\
A_{j1}+\l A_{i1}\amp A_{j2}+\l A_{i2}\amp \cdots\amp A_{jn}+\l A_{in}\\
\vdots\amp \amp \ddots\amp \vdots\\
A_{n1}\amp \cdots\amp \cdots\amp A_{nn}\end{array}\right|\amp \amp \\
\\
\amp \amp =\amp \left|\begin{array}{cccc}A_{11}\amp A_{12}\amp \cdots\amp A_{1n}\\
\vdots \amp \vdots \amp \vdots \amp \vdots \\
A_{j1}\amp A_{j2}\amp \cdots\amp A_{jn}\\\vdots\amp \amp \ddots\amp \vdots\\
A_{n1}\amp \cdots\amp \cdots\amp A_{nn}\end{array}\right|\,+\,\left|\begin{array}{cccc}A_{11}\amp A_{12}\amp \cdots\amp A_{1n}\\
\vdots \amp \vdots \amp \vdots \amp \vdots \\
\l A_{i1}\amp \l A_{i2}\amp \cdots\amp \l A_{in}\\\vdots\amp \amp \ddots\amp \vdots\\
A_{n1}\amp \cdots\amp \cdots\amp A_{nn}\end{array}\right|\amp \amp \\
\\
\amp \amp \qquad\amp\qquad\qquad(\text{by }\knowl{./knowl/xref/detdef.html}{\text{Definition 4.1.1}} \knowl{./knowl/xref/detp3.html}{\text{Property 3}})\amp\amp\\
\\
\amp \amp =\amp \left|\begin{array}{cccc}A_{11}\amp A_{12}\amp \cdots\amp A_{1n}\\
\vdots \amp \vdots \amp \vdots \amp \vdots \\
A_{j1}\amp A_{j2}\amp \cdots\amp A_{jn}\\\vdots\amp \amp \ddots\amp \vdots\\
A_{n1}\amp \cdots\amp \cdots\amp A_{nn}\end{array}\right|\,+\,
\l\left|\begin{array}{cccc}A_{11}\amp A_{12}\amp \cdots\amp A_{1n}\\
\vdots \amp \vdots \amp \vdots \amp \vdots \\
A_{i1}\amp A_{i2}\amp \cdots\amp A_{in}\\\vdots\amp \amp \ddots\amp \vdots\\
A_{n1}\amp \cdots\amp \cdots\amp A_{nn}\end{array}\right|\amp \amp \\
\\
\amp \amp \qquad\amp\qquad\qquad(\text{by }\knowl{./knowl/xref/detdef.html}{\text{Definition 4.1.1}} \knowl{./knowl/xref/detp3.html}{\text{Property 3}})\amp\amp\\
\\
\amp \amp =\amp \left|\begin{array}{cccc}A_{11}\amp A_{12}\amp \cdots\amp A_{1n}\\
\vdots \amp \vdots \amp \vdots \amp \vdots \\ A_{j1}\amp A_{j2}\amp \cdots\amp A_{jn}\\\vdots\amp \amp \ddots\amp \vdots\\
A_{n1}\amp \cdots\amp \cdots\amp A_{nn}\end{array}\right|\,+\,\l\cdot0\amp \amp \\
\\
\amp \amp \qquad\amp\qquad\qquad(\text{by }\knowl{./knowl/xref/detthmeqrows0.html}{\text{Theorem 4.4.2}}\text{ since rows }i\text{ and }j\text{ are equal})\amp\amp\\
\\
\amp \amp =\amp \det(\A).\amp \amp
\end{align*}
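To illustrate, let \(\A=\lmatrix{rr} 1 \amp 2 \\ 3 \amp 4 \rmatrix\) and perform \(2R_1+R_2\rightarrow R_2\) to obtain \(\B=\lmatrix{rr} 1 \amp 2 \\ 5 \amp 8 \rmatrix.\) Then by Theorem 4.1.2,
\begin{equation*}
\det(\A)=1\cdot4-2\cdot3=-2\quad\text{and}\quad\det(\B)=1\cdot8-2\cdot5=-2,
\end{equation*}
so the determinant is unchanged, just as Theorem 4.4.3 guarantees.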
Theorem 4.4.4. Determinants and zero rows.
If \(\A\in\R^{n\times n}\) has a zero row, then \(\det(\A)=0.\)
Proof.
Suppose row \(i\) of \(\A\) is a zero row, and let \(\B\) be the result of the elementary row operation \(2R_i\rightarrow R_i.\) Since row \(i\) is zero, \(\B=\A.\) On the other hand, by Definition 4.1.1 Property 3, \(\det(\B)=2\det(\A).\) Thus \(\det(\A)=2\det(\A),\) which forces \(\det(\A)=0.\)
Theorem 4.4.5. The determinant of triangular matrices.
If \(\A \in \R^{n \times n}\) is upper- or lower-triangular, then \(\det(\A)\) equals the product of the diagonal entries of \(\A.\)
Proof.
Consider an arbitrary upper-triangular matrix
\begin{equation*}
\U=\left(\begin{array}{ccccc}U_{11}\amp\times\amp\times\amp\cdots\amp\times\\0\amp U_{22}\amp\times\amp\cdots\amp\vdots\\ \vdots\amp0\amp\ddots\amp\ddots\amp\vdots\\ \vdots\amp\vdots\amp \ddots\amp\ddots\amp\times\\0\amp0\amp\cdots\amp0\amp U_{nn}\end{array}\right)
\end{equation*}
We proceed by induction on the size of the matrix \(n.\) For the base case we use \(n=2\) and observe that by the formula in Theorem 4.1.2 the determinant of \(\left(\begin{array}{cc} U_{11} \amp \times \\ 0 \amp U_{22} \end{array}\right)\) is \(\det(\U)=U_{11}\cdot U_{22}-(\times)\cdot0=U_{11}\cdot U_{22},\) as desired.
The induction hypothesis is that for every upper-triangular matrix \(\U\in\R^{k\times k}\) we have \(\displaystyle{\det(\U)=\prod_{i=1}^{k}U_{ii}}.\)
Now in the induction step we assume the induction hypothesis, fix such a \(k\) and let \(\U'\in\R^{(k+1)\times(k+1)}\) be upper-triangular. Expand the determinant along column one by Theorem 4.1.10. Since \(\U'_{\ast 1}\) contains only \(0\)’s below the first row, we see that \(\det(\U')=U'_{11}\cdot|\boldsymbol{M_{11}}|.\) But \(\boldsymbol{M_{11}}\) is a \(k\times k\) minor which is evidently upper-triangular, so by the induction hypothesis we have \(\displaystyle{|\boldsymbol{M_{11}}|=\prod_{i=2}^{k+1}U'_{ii}}.\) Thus \(\displaystyle{\det(\U')=U'_{11}\cdot\prod_{i=2}^{k+1}U'_{ii}=\prod_{i=1}^{k+1}U'_{ii}}.\) That is, if we can compute the determinant of every upper-triangular matrix in \(\R^{k\times k}\) by taking the product of the diagonal entries, we can do the same for every upper-triangular matrix in \(\R^{(k+1)\times(k+1)}.\)
We conclude by mathematical induction that for all \(\displaystyle{\U\in\mathcal{U}^{n\times n},\,\det(\U)=\prod_{i=1}^nU_{ii}}.\)
The proof for \(n\times n\) lower-triangular matrices proceeds almost identically.
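For example, Theorem 4.4.5 lets us read off the determinant of a triangular matrix at a glance:
\begin{equation*}
\left|\begin{array}{rrr} 2 \amp 5 \amp 1 \\ 0 \amp 3 \amp 7 \\ 0 \amp 0 \amp 4 \end{array}\right|=2\cdot3\cdot4=24.
\end{equation*}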
Corollary 4.4.6. Determinant of diagonal matrices.
Let \(\D=\left(\begin{array}{cccc}D_{11}\amp 0\amp \cdots\amp 0\\0\amp D_{22}\amp \cdots\amp 0\\\vdots\amp \amp \ddots\amp \vdots\\ 0\amp \cdots\amp \cdots\amp D_{nn}\end{array}\right) \in \R^{n \times n}\) be a diagonal matrix. Then \(\displaystyle{\det(\D)=D_{11}D_{22}\cdots D_{nn}=\prod_{i=1}^nD_{ii}}.\)
Proof.
\begin{align*}
\det(\D)\,\,\amp =\quad\left|\begin{array}{cccc}D_{11}\amp 0\amp \cdots\amp 0\\0\amp D_{22}\amp \cdots\amp 0\\\vdots\amp \amp \ddots\amp \vdots\\
0\amp \cdots\amp \cdots\amp D_{nn}\end{array}\right|\,\,=\,\,D_{11}\left|\begin{array}{cccc}1\amp 0\amp \cdots\amp 0\\0\amp D_{22}\amp \cdots\amp 0\\\vdots\amp \amp \ddots\amp \vdots\\
0\amp \cdots\amp \cdots\amp D_{nn}\end{array}\right|\\
\amp =\,D_{11}D_{22}\left|\begin{array}{cccc}1\amp 0\amp \cdots\amp 0\\0\amp 1\amp \cdots\amp 0\\\vdots\amp \amp \ddots\amp \vdots\\
0\amp \cdots\amp \cdots\amp D_{nn}\end{array}\right|\\
\amp \qquad\vdots\\
\amp =\left(\prod_{i=1}^nD_{ii}\right)\,\cdot\,\det(I)\\
\amp =\left(\prod_{i=1}^nD_{ii}\right)\qquad\qquad\qquad\qquad\text{by }\knowl{./knowl/xref/detdef.html}{\text{Definition 4.1.1}} \knowl{./knowl/xref/detp1.html}{\text{Property 1}}
\end{align*}
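For instance,
\begin{equation*}
\left|\begin{array}{rrr} 3 \amp 0 \amp 0 \\ 0 \amp -2 \amp 0 \\ 0 \amp 0 \amp 5 \end{array}\right|=(3)(-2)(5)=-30.
\end{equation*}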
Corollary 4.4.7. For triangular matrices \(\U,\L\) the determinant is invariant under transposition.
Let \(\U\in\mathcal{U}^{n\times n}\) and \(\L\in\mathcal{L}^{n\times n}\text{.}\) Then
\begin{equation*}
\det(\U^T)=\det(\U)\quad\text{and}\quad\det(\L^T)=\det(\L).
\end{equation*}
Proof.
If \(\U\) is upper-triangular then \(\U^T\) is lower-triangular, and transposition leaves the diagonal entries fixed. By Theorem 4.4.5 both determinants equal the product of the diagonal entries, so \(\det(\U^T)=\det(\U).\) The same argument applies to \(\L.\)
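For instance, with \(\U=\lmatrix{rr} 1 \amp 2 \\ 0 \amp 3 \rmatrix\) we have
\begin{equation*}
\det(\U)=\left|\begin{array}{rr} 1 \amp 2 \\ 0 \amp 3 \end{array}\right|=3\quad\text{and}\quad\det(\U^T)=\left|\begin{array}{rr} 1 \amp 0 \\ 2 \amp 3 \end{array}\right|=3.
\end{equation*}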
A very important property of the determinant is the fact that it is multiplicative.
Theorem 4.4.8. The determinant of a product of square matrices is the product of the determinants.
Let \(\A,\B\in\R^{n\times n}.\) Then
\begin{equation*}
\det(\A\B)=\det(\A)\det(\B).
\end{equation*}
Proof.
We prove the result in the \(2\times2\) case. Let \(\A,\B \in \R^{2\times2}\) with \(\A=\left(\begin{array}{cc}a\amp b\\ c\amp d \end{array}\right)\) and \(\B=\left(\begin{array}{cc}a' \amp b'\\ c' \amp d' \end{array}\right).\) By Theorem 4.1.2 we have
\begin{equation*}
\det(\A)=ad-bc\quad\,\text{ and }\quad\det(\B)=a'd'-b'c'.
\end{equation*}
The matrix product is \(\A\B=\left(\begin{array}{cc}aa'+bc' \amp ab'+bd'\\ ca'+dc' \amp cb'+dd' \end{array}\right).\)
Computing the determinant of the product, we have
\begin{align*}
\det(\A\B)\amp =(aa'+bc')(cb'+dd')-(ab'+bd')(ca'+dc')\\
\amp =aca'b'+ada'd'+bcb'c'+bdc'd'\\
\amp\qquad\qquad -aca'b'-adb'c'-bca'd'-bdc'd'\\
\amp =ada'd'+bcb'c'-adb'c'-bca'd'\\
\amp =(ad-bc)(a'd'-b'c')\\
\amp =\det(\A)\det(\B).
\end{align*}
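As a quick check, let \(\A=\lmatrix{rr} 1 \amp 2 \\ 3 \amp 4 \rmatrix\) and \(\B=\lmatrix{rr} 2 \amp 0 \\ 1 \amp 1 \rmatrix.\) Then \(\det(\A)=-2\) and \(\det(\B)=2,\) while
\begin{equation*}
\A\B=\lmatrix{rr} 4 \amp 2 \\ 10 \amp 4 \rmatrix,\qquad\det(\A\B)=16-20=-4=\det(\A)\det(\B).
\end{equation*}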
Theorem 4.4.9. The determinant of the inverse.
If \(\A\in\R^{n\times n}\) is a nonsingular matrix, then \(\displaystyle{\det\left(\A^{-1}\right)=\frac{1}{\det(\A)}}.\)
Proof.
Since \(\A\) is nonsingular, \(\A\A^{-1}=I.\) By Theorem 4.4.8 and Definition 4.1.1 Property 1,
\begin{equation*}
\det(\A)\det\left(\A^{-1}\right)=\det(I)=1.
\end{equation*}
In particular \(\det(\A)\ne0,\) and dividing through by \(\det(\A)\) gives \(\det\left(\A^{-1}\right)=\frac{1}{\det(\A)}.\)
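To illustrate, if \(\A=\lmatrix{rr} 1 \amp 2 \\ 3 \amp 4 \rmatrix\) then \(\det(\A)=-2\) and \(\A^{-1}=\lmatrix{rr} -2 \amp 1 \\ \frac{3}{2} \amp -\frac{1}{2} \rmatrix,\) so
\begin{equation*}
\det\left(\A^{-1}\right)=(-2)\left(-\frac{1}{2}\right)-(1)\left(\frac{3}{2}\right)=1-\frac{3}{2}=-\frac{1}{2}=\frac{1}{\det(\A)}.
\end{equation*}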
Theorem 4.4.10. The determinant of a permutation matrix is invariant under transposition.
Let \(\P\) be an \(n\times n\) permutation matrix. Then \(\det(\P^T)=\det(\P).\)
Proof.
A permutation matrix satisfies \(\P^T=\P^{-1},\) so \(\P\P^T=I\) and by Theorem 4.4.8, \(\det(\P)\det(\P^T)=\det(I)=1.\) Since \(\P\) is obtained from \(I\) by a sequence of row swaps, Definition 4.1.1 Property 2 gives \(\det(\P)=\pm1,\) and likewise \(\det(\P^T)=\pm1.\) The product of the two can equal \(1\) only if \(\det(\P^T)=\det(\P).\)
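For example, the cyclic permutation matrix below is obtained from \(I\) by two row swaps, and so is its transpose, so both determinants equal \(+1\):
\begin{equation*}
\P=\lmatrix{rrr} 0 \amp 1 \amp 0 \\ 0 \amp 0 \amp 1 \\ 1 \amp 0 \amp 0 \rmatrix,\qquad\det(\P)=\det(\P^T)=1.
\end{equation*}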
Theorem 4.4.11. The determinant is invariant under transposition.
Let \(\A\in\R^{n\times n}.\) Then \(\det(\A^T)=\det(\A).\)
Proof.
By elimination we may write \(\A=\P^{-1}_1\L\U\P^{-1}_2\) for permutation matrices \(\P_1,\P_2,\) a lower-triangular matrix \(\L\) and an upper-triangular matrix \(\U.\) Taking transposes, and using the fact that \(\left(\P^{-1}\right)^T=\P\) for any permutation matrix, we have
\begin{align*}
\A^T=\amp\left(\P^{-1}_2\right)^T\U^T\L^T\left(\P^{-1}_1\right)^T\\
=\amp\P_2\U^T\L^T\P_1.
\end{align*}
Now by Theorem 4.4.8, Corollary 4.4.7 and Theorem 4.4.10 we have
\begin{align*}
\det\left(\A^T\right)=\amp\det\left(\P_2\U^T\L^T\P_1\right)\\
=\amp\det\left(\P_2\right)\det\left(\U^T\right)\det(\L^T)\det\left(\P_1\right)\\
=\amp\det\left(\P_2\right)\det\left(\U\right)\det(\L)\det\left(\P_1\right)\\
=\amp\det\left(\P_1\right)\det\left(\L\right)\det(\U)\det\left(\P_2\right)\\
=\amp\det\left(\P_1^T\right)\det\left(\L\right)\det(\U)\det\left(\P_2^T\right)\\
=\amp\det\left(\P^{-1}_1\right)\det\left(\L\right)\det(\U)\det\left(\P^{-1}_2\right)\\
=\amp\det\left(\P^{-1}_1\L\U\P^{-1}_2\right)\\
=\amp\det(\A).
\end{align*}
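As a simple check, \(\A=\lmatrix{rr} 1 \amp 2 \\ 3 \amp 4 \rmatrix\) and \(\A^T=\lmatrix{rr} 1 \amp 3 \\ 2 \amp 4 \rmatrix\) both have determinant \(1\cdot4-2\cdot3=-2.\)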
Theorem 4.4.12. Column analogues for Property 2 of Definition 4.1.1, Theorem 4.4.2 and Theorem 4.4.4.
Let \(\A\in\R^{n\times n}.\) Then 1) if we swap two columns of a matrix then the sign of the determinant changes, 2) if \(\A\) has a zero column, then \(\det(\A)=0,\) and 3) if two columns of a matrix \(\A\) are equal, then \(\det(\A)=0.\)
Proof.
Each statement follows from its row analogue applied to \(\A^T.\) The columns of \(\A\) are the rows of \(\A^T,\) and by Theorem 4.4.11, \(\det(\A)=\det(\A^T).\) Thus a column swap in \(\A\) is a row swap in \(\A^T\) and changes the sign of the determinant by Definition 4.1.1 Property 2; a zero column in \(\A\) is a zero row in \(\A^T,\) so \(\det(\A)=0\) by Theorem 4.4.4; and two equal columns in \(\A\) are two equal rows in \(\A^T,\) so \(\det(\A)=0\) by Theorem 4.4.2.
Corollary 4.4.13. Performing scalar multiplication with addition any finite number of times leaves the determinant unchanged.
Let \(\A \in\R^{n \times n}\) and let \(\B\) be obtained from \(\A\) by any finite number of elementary row operations of the form \(aR_i+R_j \rightarrow R_j.\) Then \(\det(\B)=\det(\A).\)
Proof.
Apply Theorem 4.4.3 once for each operation: each application leaves the determinant unchanged, so the determinant is unchanged after the entire sequence.
We now have enough tools to determine how performing elimination on a matrix affects the determinant.
Theorem 4.4.14. Elimination and the determinant.
Let \(\A \in \R^{n \times n}\) and let \(\U\) be the result of performing elimination on \(\A.\) Then \(\det(\A)=s\det(\U)\) with \(s \in \R,\) \(s \neq 0.\)
Proof.
Elimination on \(\A\) can be accomplished by performing elementary row operations, so it is sufficient to determine the effect of each type. By Definition 4.1.1 Property 2 a row swap (\(R_i \leftrightarrow R_j\)) changes the determinant by a sign, and by Property 3 a scalar multiple (\(aR_i \rightarrow R_i\), \(a \neq 0\)) changes the determinant by a factor of \(a.\) Theorem 4.4.3 shows that the scalar multiple with addition row operation (\(\ell R_i+R_j \rightarrow R_j\)) does not change the determinant at all. Thus if \(\U\) is the echelon form of a matrix \(\A,\) then the determinant of \(\A\) is a (nonzero) scalar multiple of the determinant of \(\U.\) Specifically, \(\det(\A)=\frac{(-1)^k}{L}\det(\U)\) where \(k\) is the number of row swaps and \(L \neq 0\) is the product of all the scalars used in the scalar multiple row operations.
We can use these theorems to show that the value of the determinant tells us whether the matrix is invertible (nonsingular).
Theorem 4.4.15. Invertibility and the determinant.
\(\A \in \R^{n \times n}\) is nonsingular if and only if \(\det(\A)\ne0.\) Equivalently, \(\A\) is singular if and only if \(\det(\A)=0.\)
Proof.
First suppose that \(\A\) is nonsingular. Then the echelon form of \(\A,\) call it \(\U,\) is upper-triangular with no zero rows, so every diagonal entry of \(\U\) is nonzero and by Theorem 4.4.5, \(\det(\U)\ne0.\) By Theorem 4.4.14 we know \(\det(\A)=s\det(\U)\) with \(s\ne0,\) so \(\det(\A)\ne0.\)
Now suppose that \(\A\in \R^{n \times n}\) is singular. Then the echelon form of \(\A,\) call it \(\U,\) will have a zero row, so by Theorem 4.4.4, \(\det(\U)=0.\) By Theorem 4.4.14 we know \(\det(\A)=s\det(\U)\) for some \(s \in\R,\) thus \(\det(\A)=s\cdot 0 = 0.\)
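For example, \(\A=\lmatrix{rr} 1 \amp 2 \\ 2 \amp 4 \rmatrix\) is singular since its second row is twice its first, and indeed \(\det(\A)=1\cdot4-2\cdot2=0.\)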
Remark 4.4.16. The determinant by elimination.
The theorems in this section suggest another way to find the determinant of a matrix. If we put \(\A\) in echelon form, \(\U,\) then we know the determinant of \(\U\) is just the product of its diagonal entries, and the determinant of \(\A\) is a scalar multiple of the determinant of \(\U.\)
Furthermore, if we restrict ourselves to elementary row operations in the form of row swaps or scalar multiplication with addition, then \(\,\det(\A)=(-1)^k \det(\U)\) where \(k\) is the number of row swaps we performed.
To illustrate suppose that \(\A=\left(\begin{array}{cc} a\amp b \\ c \amp d \end{array}\right)\) is a general \(2 \times 2\) matrix. If \(a \neq 0\) then \(\A=\left(\begin{array}{cc} a\amp b \\ c \amp d \end{array}\right) \xrightarrow{-\frac{c}{a}R_1 +R_2 \rightarrow R_2} \left(\begin{array}{cc} a\amp b \\ 0 \amp -\frac{c}{a}b+d \end{array}\right)=\U.\) By Theorem 4.4.5 we know the determinant of \(\U\) is the product of its diagonal entries, \(\det(\U)=a\cdot \left( -\frac{c}{a}b+d \right)=ad-bc,\) and by Corollary 4.4.13 we have \(\det(\A)=\det(\U).\) This gives the same formula as in Theorem 4.1.2, \(\det(\A)=ad-bc.\)
Example 4.4.17. \(\boldsymbol{3\times3}\) determinant by elimination.
\begin{equation*}
\begin{array}{lll}\A=\lmatrix{rrr} 1 \amp -1 \amp 1 \\ -2 \amp 0 \amp 1\\ 0 \amp 1 \amp 4\rmatrix \amp\underset{sign=+1}{\xrightarrow{2R_1+R_2 \rightarrow R_2}}\amp\lmatrix{rrr} 1 \amp -1 \amp 1 \\ 0 \amp -2 \amp 3 \\ 0 \amp 1 \amp 4\rmatrix\\
\amp \underset{sign=+1}{\xrightarrow{\frac{1}{2}R_2 +R_3\rightarrow R_3}}\amp\lmatrix{rrr} 1 \amp -1 \amp 1 \\ 0 \amp -2 \amp 3 \\ 0 \amp 0 \amp \frac{11}{2}\rmatrix=\U.\end{array}
\end{equation*}
Note that at each stage we keep track of the sign of the determinant, which toggles between \(+1\) and \(-1\) with each row swap. Since we did not do any row swaps we have sign\(=+1\) so by Theorem 4.4.5 and Remark 4.4.16,
\begin{equation*}
\det(\A)=(\text{sign})\det(\U)=(1)(-2)\left(\frac{11}{2}\right)=-11.
\end{equation*}
Example 4.4.18. \(\boldsymbol{4\times4}\) determinant by elimination.
\begin{equation*}
\begin{array}{lll}\A=\lmatrix{rrrr}1 \amp -1 \amp 4 \amp 0 \\ 0 \amp 0 \amp 3 \amp 1 \\ 2 \amp 2 \amp -1 \amp 1\\3 \amp -1 \amp 2 \amp 2 \rmatrix \amp \underset{sign=+1}{\xrightarrow{-2R_1+R_3 \rightarrow R_3}}\amp\lmatrix{rrrr}1 \amp -1 \amp 4 \amp 0 \\ 0 \amp 0 \amp 3 \amp 1 \\ 0 \amp 4 \amp -9 \amp 1\\3 \amp -1 \amp 2 \amp 2 \rmatrix \\
\amp\underset{sign=+1}{\xrightarrow{-3R_1+R_4 \rightarrow R_4 }}\amp\lmatrix{rrrr}1 \amp -1 \amp 4 \amp 0 \\ 0 \amp 0 \amp 3 \amp 1 \\ 0 \amp 4 \amp -9 \amp 1\\ 0\amp 2\amp -10\amp 2 \rmatrix\\
\amp\underset{sign=-1}{\xrightarrow{R_2\leftrightarrow R_3}}\amp\lmatrix{rrrr}1 \amp - 1 \amp 4 \amp 0 \\ 0 \amp 4 \amp -9 \amp 1\\ 0 \amp 0 \amp 3 \amp 1 \\0\amp 2\amp -10\amp 2 \rmatrix\\
\amp \underset{sign=-1}{\xrightarrow{-\frac{1}{2}R_2+R_4\rightarrow R_4}}\amp\lmatrix{rrrr}1 \amp -1 \amp 4 \amp 0 \\ 0 \amp 4 \amp -9 \amp 1\\ 0 \amp 0 \amp 3 \amp 1 \\0\amp 0\amp -\frac{11}{2} \amp \frac{3}{2} \rmatrix\\
\amp\underset{sign=-1}{\xrightarrow{\frac{11}{6}R_3+R_4\rightarrow R_4}}\amp\lmatrix{rrrr}1 \amp -1 \amp 4 \amp 0 \\ 0 \amp 4 \amp -9 \amp 1\\ 0 \amp 0 \amp 3 \amp 1 \\0\amp 0\amp 0 \amp \frac{10}{3} \rmatrix=\U.\end{array}
\end{equation*}
Now by Theorem 4.4.5 and Remark 4.4.16 we obtain
\begin{equation*}
\det(\A)=(\text{sign})\det(\U)=(-1)(1)(4)(3)\left(\frac{10}{3}\right)=-40.
\end{equation*}