
Section 3.13 The LU-Factorization

In any mathematical system in which products are defined, the process of factoring is always of great interest. Factoring real numbers is largely trivial (\(1=\frac{1}{\pi}\cdot\pi=\frac{11}{6}\cdot\frac{6}{11},\) and so on), but we may factor natural numbers into a product of prime powers, for example, or factor polynomials into a product of lower-degree polynomials. A complex number \(z=x+iy\) may be factored as \(z=re^{i\theta}.\)
The factorization problem for matrices has a number of distinct solutions. In this section we define and study the process of factoring \(\A\in\R^{n\times n}\) into a product \(\A=\L\U\) of a lower-triangular matrix \(\L\) and an upper-triangular matrix \(\U,\) which can be done whenever \(\A\) can be reduced to an upper-triangular matrix without interchanging rows.
Matrix factorizations are typically called decompositions. In linear algebra, though, the decomposition of a vector into a linear combination of other vectors is a fundamental operation, so as part of our effort to disambiguate notation and terminology we will call \(\A=\L\U\) the LU-factorization of \(\A.\) A quite different factorization, the QR-factorization, will be given in a later section.

Definition 3.13.1. Lower-triangular matrices.


\(\A\in\R^{n\times n}\) is lower-triangular if \(A_{ij}=0\) when \(i\lt j.\)
\begin{equation*} \L=\left( \begin{array}{ccccc} \ast \amp 0 \amp 0 \amp \cdots \amp 0 \\ \ast \amp \ast \amp 0 \amp \cdots \amp 0 \\ \ast \amp \ast \amp \ast \amp \ddots \amp \vdots \\ \vdots \amp \vdots \amp \vdots \amp \ddots \amp 0 \\ \ast \amp \ast \amp \cdots \amp \cdots \amp \ast \\ \end{array}\right). \end{equation*}
Figure 3.13.2. Lower triangular matrices. The \(\ast\) represent possibly distinct arbitrary real numbers. Note the symmetry with Figure 3.1.19.
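The defining condition \(A_{ij}=0\) when \(i\lt j\) is easy to test numerically; the following is an illustrative sketch (the helper name `is_lower_triangular` is ours, not the text's).

```python
import numpy as np

def is_lower_triangular(A: np.ndarray) -> bool:
    """A is lower-triangular when A[i, j] == 0 whenever i < j."""
    # np.triu(A, k=1) keeps only the entries strictly above the diagonal
    return np.count_nonzero(np.triu(A, k=1)) == 0

# An example lower-triangular matrix
L = np.array([[1.0,  0.0, 0.0],
              [2.0,  1.0, 0.0],
              [1.0, -3.0, 1.0]])
```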

Definition 3.13.3. The sets \(\boldsymbol{\mathcal{U}}^{n\times n}\) and \(\boldsymbol{\mathcal{L}}^{n\times n}\).

We define the sets \(\mathcal{U}^{n\times n}\subset\R^{n\times n}\) and \(\mathcal{L}^{n\times n}\subset\R^{n\times n}\) to contain all \(n\times n\) upper-triangular matrices and all \(n\times n\) lower-triangular matrices, respectively.

Remark 3.13.8. Upper- and lower-triangular elementary matrices.

Revisiting Remark 3.7.9, we observe that 
  1. Every elementary matrix \(\E\) which adds a multiple of one row to a row below it (that is, \(\E\) performs \(aR_i+R_j\rightarrow R_j\) with \(i\lt j\)) is a lower-triangular matrix, and
  2. Every elementary matrix \(\E\) which adds a multiple of one row to a row above it (\(aR_i+R_j\rightarrow R_j\) with \(i\gt j\)) is an upper-triangular matrix.
This leads to the following theorem.

Theorem 3.13.9. Products of triangular matrices.

If \(\L_1,\L_2\in\mathcal{L}^{n\times n}\) then \(\L_1\L_2\in\mathcal{L}^{n\times n},\) and if \(\U_1,\U_2\in\mathcal{U}^{n\times n}\) then \(\U_1\U_2\in\mathcal{U}^{n\times n}.\)

Proof.

The proof is a homework exercise.
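As a quick numerical illustration of the remark above (the helper `elem` and the particular matrices are ours, not the text's):

```python
import numpy as np

def elem(n: int, a: float, i: int, j: int) -> np.ndarray:
    """Elementary matrix performing a*R_i + R_j -> R_j (0-indexed rows)."""
    E = np.eye(n)
    E[j, i] = a
    return E

# -2R_1 + R_2 -> R_2 places its coefficient below the diagonal: lower-triangular
E_low = elem(3, -2.0, 0, 1)
# 4R_3 + R_1 -> R_1 places its coefficient above the diagonal: upper-triangular
E_up = elem(3, 4.0, 2, 0)
```

A product of two such lower-triangular elementary matrices is again lower-triangular, consistent with the closure just stated.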

Example 3.13.11. LU-factorization.

As an example of how the \(LU\)-factorization works, consider the matrix \(\A=\left(\begin{array}{rrr}1\amp -3\amp 11\\2\amp -5\amp 19\\1\amp -6\amp 7 \end{array} \right).\) We have
\begin{equation*} \begin{array}{llc}\left(\begin{array}{rrr|rrr}1\amp -3\amp 11\amp 1\amp 0\amp 0\\2\amp -5\amp 19\amp 0\amp 1\amp 0\\1\amp -6\amp 7\amp 0\amp 0\amp 1 \end{array} \right) \amp \underset{(-1)R_1+R_3\rightarrow R_3}{\xrightarrow{-2R_1+R_2\rightarrow R_2}}\amp\left(\begin{array}{rrr|rrr}1\amp -3\amp 11\amp 1\amp 0\amp 0\\0\amp 1\amp -3\amp -2\amp 1\amp 0\\0\amp -3\amp -4\amp -1\amp 2\amp 1 \end{array} \right)\\ \amp \xrightarrow{3R_2+R_3\rightarrow R_3}\amp\left(\begin{array}{rrr|rrr}1\amp -3\amp 11\amp 1\amp 0\amp 0\\0\amp 1\amp -3\amp -2\amp 1\amp 0\\0\amp 0\amp -13\amp -7\amp 3\amp 1 \end{array} \right)\end{array} \end{equation*}
so
\begin{align*} \left(\begin{array}{rrr}1\amp 0\amp 0\\0\amp 1\amp 0\\0\amp 3\amp 1 \end{array} \right)\!\!\left(\begin{array}{rrr}1\amp 0\amp 0\\0\amp 1\amp 0\\-1\amp 0\amp 1 \end{array} \right)\!\!\left(\begin{array}{rrr}1\amp 0\amp 0\\-2\amp 1\amp 0\\0\amp 0\amp 1\end{array}\right)\!\! \left(\begin{array}{rrr}1\amp -3\amp 11\\2\amp -5\amp 19\\1\amp -6\amp 7 \end{array} \right)\amp\!\,=\!\!\left(\begin{array}{rrr}1\amp -3\amp 11\\0\amp 1\amp -3\\0\amp 0\amp -13 \end{array} \right). \end{align*}
Each of the elementary matrices on the left is invertible, and by Theorem 3.7.7 finding those inverses requires no calculation:
\begin{align*} \left(\begin{array}{rrr}1\amp 0\amp 0\\-2\amp 1\amp 0\\0\amp 0\amp 1 \end{array} \right)^{-1}\amp =\,\,\left(\begin{array}{rrr}1\amp 0\amp 0\\2\amp 1\amp 0\\0\amp 0\amp 1 \end{array} \right)\\ \left(\begin{array}{rrr}1\amp 0\amp 0\\0\amp 1\amp 0\\-1\amp 0\amp 1 \end{array} \right)^{-1}\amp =\,\,\left(\begin{array}{rrr}1\amp 0\amp 0\\0\amp 1\amp 0\\1\amp 0\amp 1 \end{array} \right)\\ \left(\begin{array}{rrr}1\amp 0\amp 0\\0\amp 1\amp 0\\0\amp 3\amp 1 \end{array} \right)^{-1}\,\,\amp =\left(\begin{array}{rrr}1\amp 0\amp 0\\0\amp 1\amp 0\\0\amp -3\amp 1 \end{array} \right) \end{align*}
Invoking Theorem 3.13.9 we have
\begin{align*} \left(\begin{array}{rrr}1\amp -3\amp 11\\2\amp -5\amp 19\\1\amp -6\amp 7 \end{array} \right)\amp=\left(\begin{array}{rrr}1\amp 0\amp 0\\2\amp 1\amp 0\\0\amp 0\amp 1 \end{array} \right)\!\!\left(\begin{array}{rrr}1\amp 0\amp 0\\0\amp 1\amp 0\\1\amp 0\amp 1\end{array}\right)\!\!\left(\begin{array}{rrr}1\amp 0\amp 0\\0\amp 1\amp 0\\0\amp -3\amp 1 \end{array} \right)\!\!\left(\begin{array}{rrr}1\amp -3\amp 11\\0\amp 1\amp -3\\0\amp 0\amp -13 \end{array} \right)\nonumber\\ \amp \,\,=\,\,\left(\begin{array}{rrr}1\amp 0\amp 0\\2\amp 1\amp 0\\1\amp -3\amp 1 \end{array} \right)\left(\begin{array}{rrr}1\amp -3\amp 11\\0\amp 1\amp -3\\0\amp 0\amp -13 \end{array} \right)\,\,=\,\,\L\U\text{.} \end{align*}
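The factorization just computed is easy to sanity-check numerically (an illustrative check, not part of the text):

```python
import numpy as np

A = np.array([[1.0, -3.0, 11.0],
              [2.0, -5.0, 19.0],
              [1.0, -6.0,  7.0]])
L = np.array([[1.0,  0.0, 0.0],
              [2.0,  1.0, 0.0],
              [1.0, -3.0, 1.0]])
U = np.array([[1.0, -3.0, 11.0],
              [0.0,  1.0, -3.0],
              [0.0,  0.0, -13.0]])
# L is lower-triangular, U is upper-triangular, and L @ U reproduces A
```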

Remark 3.13.12.

Computing the LU-factorization via Theorem 3.13.9 is straightforward, but it is always worthwhile to view a process through a different lens, as long as it doesn’t cost too much.
An alternative approach to computing the LU-factorization is to compute \(\L\) directly from \(\L^{-1};\) it requires minimal work to find the inverse of a lower-triangular matrix (when it exists, of course). This approach is described in the following algorithm.

Algorithm 3.13.13. The LU-factorization via the augmented matrix.

  1. Form the augmented matrix \((\A|\I).\)
  2. Eliminate, using only row operations of the form \(aR_i+R_j\rightarrow R_j\) with \(i\lt j,\) to obtain \(\left(\U|\L^{-1}\right).\)
  3. Invert \(\L^{-1}\) to obtain \(\L;\) then \(\A=\L\U.\)
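A minimal sketch of this augmented-matrix approach, assuming no zero pivot is ever encountered (so no row interchanges are needed); the function name `lu_via_augmented` is ours:

```python
import numpy as np

def lu_via_augmented(A: np.ndarray):
    """Eliminate (A | I) -> (U | L^{-1}) using only row operations of the
    form a*R_j + R_i -> R_i, then invert the lower-triangular L^{-1}."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])  # the augmented matrix (A | I)
    for j in range(n - 1):                       # clear column j below the pivot
        for i in range(j + 1, n):
            M[i] += (-M[i, j] / M[j, j]) * M[j]  # a*R_j + R_i -> R_i
    U, L_inv = M[:, :n], M[:, n:]
    return np.linalg.inv(L_inv), U
```

Inverting \(\L^{-1}\) at the end is cheap because it is lower-triangular with unit diagonal.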

Remark 3.13.14. A small labor-saver.

Consider the three elementary matrices \(\E_{(a)21},\,\E_{(b)31},\,\E_{(c)32}\in\R^{3\times3}\) (so that \(\E_{(a)21}\) performs \(aR_1+R_2\rightarrow R_2,\) and so on), used for eliminating below the diagonal in \(\A\in\R^{3\times3}.\) We have
\begin{equation*} \left(\begin{array}{rrr}1\!\amp \!\!\!0\!\!\amp 0\\0\!\amp \!\!\!1\!\!\amp 0\\0\!\amp \!\!\!c\!\!\amp 1 \end{array}\right)\!\!\left(\begin{array}{rrr}1\!\amp \!\!\!0\!\!\amp 0\\0\!\amp \!\!\!1\!\!\amp 0\\b\!\amp \!\!\!0\!\!\amp 1\end{array}\right)\!\!\left(\begin{array}{rrr}1\!\amp \!\!\!0\!\!\amp 0\\a\!\amp \!\!\!1\!\!\amp 0\\0\!\amp \!\!\!0\!\!\amp 1 \end{array} \right)\!\left(\begin{array}{rrr}A_{11}\amp \!\!A_{12}\!\!\amp A_{13}\\ A_{21}\amp \!\!A_{22}\!\!\amp A_{23}\\A_{31}\amp \!\!A_{32}\!\!\amp A_{33}\end{array} \right)\!=\!\left(\begin{array}{ccc}U_{11}\amp \!\!U_{12}\!\!\amp U_{13}\\0\amp \!\!U_{22}\!\!\amp U_{23}\\0\amp \!\!0\!\!\amp U_{33}\end{array} \right) \end{equation*}
and so \(\L^{-1}=\E_{(c)32}\E_{(b)31}\E_{(a)21},\) which gives \(\L=\E_{(a)21}^{-1}\E_{(b)31}^{-1}\E_{(c)32}^{-1}.\) Interestingly,
\begin{equation*} \left(\begin{array}{rrr}1\amp 0\amp 0\\-a\amp 1\amp 0\\0\amp 0\amp 1 \end{array} \right)\left(\begin{array}{rrr}1\amp 0\amp 0\\0\amp 1\amp 0\\-b\amp 0\amp 1 \end{array} \right) \left(\begin{array}{rrr}1\amp 0\amp 0\\0\amp 1\amp 0\\0\amp -c\amp 1 \end{array} \right)=\left(\begin{array}{rrr}1\amp 0\amp 0\\-a\amp 1\amp 0\\-b\amp -c\amp 1 \end{array} \right)=\L. \end{equation*}
This property holds for \(n\gt3\) as well. So rather than eliminating \((\A|\I)\rightarrow\left(\U|\L^{-1}\right)\) and then inverting \(\L^{-1},\) we eliminate \(\A\rightarrow\U\) and simply write down \(\L\) by negating the coefficient of each row operation we used and placing it in the corresponding position. This allows us to avoid repeatedly rewriting the RHS of an augmented matrix.
But beware! This nice relation does not hold if the order of the factors is changed (as one might expect, since the associated matrix multiplication is not commutative).
In a \(3\times3\) setting for example, multiplying the same three inverses in the opposite order gives
\begin{equation*} \left(\begin{array}{rrr}1\amp 0\amp 0\\0\amp 1\amp 0\\0\amp -c\amp 1 \end{array} \right)\left(\begin{array}{rrr}1\amp 0\amp 0\\0\amp 1\amp 0\\-b\amp 0\amp 1\end{array}\right)\left(\begin{array}{rrr}1\amp 0\amp 0\\-a\amp 1\amp 0\\0\amp 0\amp 1 \end{array} \right)=\left(\begin{array}{ccc}1\amp 0\amp 0\\-a\amp 1\amp 0\\ac-b\amp -c\amp 1 \end{array} \right)\ne\L\text{.} \end{equation*}
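This bookkeeping is easy to automate. The following sketch (a standard Doolittle-style scheme; the name `lu_direct` is ours, and nonzero pivots are assumed) eliminates \(\A\rightarrow\U\) while dropping the negated coefficient of each row operation into \(\L\):

```python
import numpy as np

def lu_direct(A: np.ndarray):
    """Eliminate A -> U, recording in L the negated coefficient of each
    row operation a*R_j + R_i -> R_i (assumes every pivot is nonzero)."""
    n = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(n)
    for j in range(n - 1):
        for i in range(j + 1, n):
            a = -U[i, j] / U[j, j]  # coefficient of the row operation
            U[i] += a * U[j]
            L[i, j] = -a            # the negated coefficient lands at (i, j)
    return L, U
```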

Remark 3.13.15. \(\L\D\boldsymbol{\widetilde{U}}\) factorization.

Any upper-triangular matrix \(\U\) can be factored as \(\U=\D\widetilde{\U}\) where \(\D\) is a diagonal matrix with
\begin{equation*} D_{ii}=\begin{cases}U_{ii}\amp U_{ii}\ne0\\1\amp U_{ii}=0\end{cases} \end{equation*}
and \(\widetilde{\U}=\D^{-1}\U\) is upper triangular; each row of \(\widetilde{\U}\) is the corresponding row of \(\U\) divided by \(D_{ii},\) so that
\begin{equation*} \widetilde{U}_{ii}=\begin{cases}1\amp U_{ii}\ne0\\0\amp U_{ii}=0\,.\end{cases} \end{equation*}
In particular, if \(\U\) has a zero in one of its diagonal positions (say \(U_{ii}=0\)), then \(D_{ii}=1\) and \(\widetilde{\U}_{i\ast}=\U_{i\ast}.\)
So once we have an \(\L\U\)-factorization for \(\A\) we can immediately compute what we call its \(\L\D\widetilde{\U}\)-factorization. Proceeding from \(\L\U\) to \(\L\D\widetilde{\U}\) is easier in practice than we might expect from the notation above.
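Passing from \(\L\U\) to \(\L\D\widetilde{\U}\) amounts to dividing each row of \(\U\) by its pivot; a sketch (the helper name `ldu_split` is ours):

```python
import numpy as np

def ldu_split(U: np.ndarray):
    """Factor an upper-triangular U as D @ U_tilde, with D_ii = U_ii
    (or 1 when U_ii == 0) and each row of U_tilde scaled by 1/D_ii."""
    d = U.diagonal().astype(float)  # astype copies, so d is writable
    d[d == 0] = 1.0                 # D_ii = 1 wherever U_ii = 0
    D = np.diag(d)
    U_tilde = U / d[:, None]        # divide row i of U by D_ii
    return D, U_tilde
```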

Example 3.13.16. \(\L\U\)-factorization and \(\L\D\boldsymbol{\widetilde{U}}\)-factorization.

We factor two matrices; \(\A\) eliminates to a \(\U\) with no zeros on its diagonal while \(\B\) eliminates to a \(\U\) with a zero on its diagonal.
\begin{align*} \A\amp=\left(\begin{array}{rrr}3\amp -2\amp -4\\6\amp -6\amp -9\\-15\amp -2\amp 19\end{array}\right)=\left(\begin{array}{rrr}1\amp 0\amp 0\\2\amp 1\amp 0\\-5\amp 6\amp 1\end{array}\right)\left(\begin{array}{rrr}3\amp -2\amp -4\\0\amp -2\amp -1\\0\amp 0\amp 5\end{array}\right)=\L\U\\ \amp=\left(\begin{array}{rrr}1\amp 0\amp 0\\2\amp 1\amp 0\\-5\amp 6\amp 1\end{array}\right)\left(\begin{array}{rrr}3\amp 0\amp 0\\0\amp -2\amp 0\\0\amp 0\amp 5\end{array}\right)\left(\begin{array}{rrr}1\amp -2/3\amp -4/3\\0\amp 1\amp 1/2\\0\amp 0\amp 1\end{array}\right)\,\,=\,\,\L\D\widetilde{\U} \end{align*}
\begin{align*} \B\amp=\left(\begin{array}{rrr}2\amp 4\amp 9\\10\amp 17\amp 46\\6\amp 33\amp 20\end{array}\right)=\left(\begin{array}{rrr}1\amp 0\amp 0\\5\amp 1\amp 0\\3\amp -7\amp 1\end{array}\right)\left(\begin{array}{rrr}2\amp 4\amp 9\\0\amp -3\amp 1\\0\amp 0\amp 0\end{array}\right)=\L\U\\ \amp=\left(\begin{array}{rrr}1\amp 0\amp 0\\5\amp 1\amp 0\\3\amp -7\amp 1\end{array}\right)\left(\begin{array}{rrr}2\amp 0\amp 0\\0\amp -3\amp 0\\0\amp 0\amp 1\end{array}\right)\left(\begin{array}{rrr}1\amp 2\amp 9/2\\0\amp 1\amp -1/3\\0\amp 0\amp 0\end{array}\right)\,\,=\,\,\L\D\widetilde{\U} \end{align*}
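The second factorization, with its zero pivot, checks out numerically as well (an illustrative verification):

```python
import numpy as np

L = np.array([[1.0,  0.0, 0.0],
              [5.0,  1.0, 0.0],
              [3.0, -7.0, 1.0]])
D = np.diag([2.0, -3.0, 1.0])   # D_33 = 1 because U_33 = 0
U_tilde = np.array([[1.0, 2.0,  9/2],
                    [0.0, 1.0, -1/3],
                    [0.0, 0.0,  0.0]])
B = L @ D @ U_tilde             # reproduces the original B
```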