
Section 3.4 Solving General \(\A\x=\b\)

In Section 3.1 we saw how to determine the solution set for square systems, and that every tall system reduces to a square or wide RREF system once full zero rows are discarded; square systems, too, may reduce to wide RREF systems.
We used elimination to row-reduce a given wide system to an equivalent system in row-echelon form and learned how to determine the number of solutions in its solution set, but we have not yet learned how to solve wide systems in general. We now complete our study of the solutions of \(\A\x=\b\) by solving the general matrix-vector equation in the case where it reduces to a wide row-echelon system.

Subsection 3.4.1 Solving \(\A\x=\b\) when there are an infinite number of solutions

Solving a linear equation \(\A\x=\b\) with an infinite number of solutions means writing an expression in one or more variables which satisfies \(\A\x=\b\) for every possible value of those variables. We already have in our toolbox almost all the tools we need; the only additional concepts we will need are the notions of parameter and parametrized set.

Definition 3.4.1. Parameters.

A parameter is a variable whose range of values is used in the definition of some set, called a parametrized set. We think of the parameter as having a fixed value for each individual element of the parametrized set.

Example 3.4.2. A parametrized subset of \(\boldsymbol{\R^3}\).

The set \(\left\{\left.\left(\begin{array}{c}t\\ t^2\\ t^3\end{array}\right)\right|\,t\in[0,\infty)\right\}\) is a parametrized subset of \(\R^3,\) parametrized by \(t\) whose values range through all nonnegative real numbers.
Various elements of this parametrized set are
\begin{equation*} \left(\begin{array}{c}0\\ 0\\ 0\end{array}\right)\qquad\text{for }t=0 \end{equation*}
\begin{equation*} \left(\begin{array}{c}1\\ 1\\ 1\end{array}\right)\qquad\text{for }t=1 \end{equation*}
\begin{equation*} \left(\begin{array}{r}2\\ 4\\ 8\end{array}\right)\qquad\text{for }t=2 \end{equation*}
and so on.
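(Optional) To make the parameter-to-element correspondence concrete, here is a minimal Python sketch (the helper name element is our own) that produces one element of the set for each chosen value of \(t.\)
```python
# Sample a few elements of the parametrized set { (t, t^2, t^3) : t >= 0 }
# by choosing values of the parameter t (the helper name element is ours).
def element(t):
    """Return the element of the set corresponding to the parameter value t."""
    if t < 0:
        raise ValueError("the parameter t must be nonnegative")
    return (t, t**2, t**3)

for t in (0, 1, 2, 2.5):
    print(t, element(t))   # each fixed value of t picks out one element
```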
The parametrized sets that will be of use to us arise as the general solutions of consistent wide linear systems. Such solutions are parametrized by one or more real variables whose values range over all of \(\R.\)

Definition 3.4.3. Affine Subsets of \(\boldsymbol{\R^n}\).


Fix \(S\subset\R,\) let \(\u\in\R^n\setminus\{\vec{0}\},\) let \(\vec{a}\in\R^n.\)
  1. For \(n\ge1\) a set of the form
    \begin{equation*} \left\{\left.\vec{a}+c\u\,\right|\,c\in S\right\} \end{equation*}
    is called a one-parameter affine subset of \(\R^n\text{.}\) Geometrically, such a set is a line in \(\R^n.\)
  2. Similarly, let \(\vec{a}\in\R^n\) and \(\u,\v\in\R^n\setminus\{\vec{0}\}\) where neither of \(\u,\v\) is a linear combination of the other (that is, \(\u,\v\) are neither parallel nor antiparallel). For \(n\ge2\) a set of the form
    \begin{equation*} \left\{\left.\vec{a}+c_1\u+c_2\v\,\right|\,c_1,c_2\in S\right\} \end{equation*}
    is called a two-parameter affine subset of \(\R^n.\) Geometrically, such a set is a plane in \(\R^n.\)
  3. Continuing, let \(\vec{a}\in\R^n\) and \(\u,\v,\w\in\R^n\setminus\left\{\vec{0}\right\}\) where none of \(\u,\v,\w\) is a linear combination of the other two. For \(n\ge3\) a set of the form
    \begin{equation*} \left\{\left.\vec{a}+c_1\u+c_2\v+c_3\w\,\right|\,c_1,c_2,c_3\in S\right\} \end{equation*}
    is called a three-parameter affine subset of \(\R^n.\) Geometrically, such a set is a \(3\)-dimensional affine subset of \(\R^n\) (a hyperplane when \(n=4\)).
  4. For \(n\ge k\ge4\) the above notion extends naturally to \(k\)-parameter affine subsets of \(\R^n;\) when \(k=n-1,\) such a subset is called a hyperplane.

Example 3.4.4. A two-parameter affine subset of \(\R^3\).

The set \(\left\{\left.\left(\begin{array}{r}-5\\1\\9\end{array}\right)+s\left(\begin{array}{r}2\\-3\\7\end{array}\right)+t\left(\begin{array}{r}-4\\2\\1\end{array}\right)\right|\,s,t\in\R\right\}\) is a two-parameter affine subset of \(\R^3,\) parametrized by \(s\) and \(t\) whose values range through all real numbers.
Various elements of this parametrized set are
\begin{equation*} \left(\begin{array}{r}-5\\1\\9\end{array}\right)\qquad\text{for }s=0,\,t=0 \end{equation*}
\begin{equation*} \left(\begin{array}{r}-14\\9/2\\15\end{array}\right)\qquad\text{for }s=\frac{1}{2},\,t=\frac{5}{2} \end{equation*}
\begin{equation*} \left(\begin{array}{r}-13\\1\\26\end{array}\right)\qquad\text{for }s=2,\,t=3 \end{equation*}
and so on, admitting ordered pairs from throughout the \(st\)-plane, with the resulting vector tips tracing out a plane in \(\R^3.\)
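(Optional) As a quick numerical check, assuming Python with NumPy is available, the sample elements above can be reproduced by evaluating \(\vec{a}+s\u+t\v\) directly.
```python
import numpy as np

# Evaluate a + s*u + t*v at the sample parameter values used above;
# each ordered pair (s, t) picks out one element of the plane.
a = np.array([-5.0, 1.0, 9.0])
u = np.array([2.0, -3.0, 7.0])
v = np.array([-4.0, 2.0, 1.0])
for s, t in [(0, 0), (0.5, 2.5), (2, 3)]:
    print((s, t), a + s * u + t * v)
# (0, 0)     -> (-5, 1, 9)
# (1/2, 5/2) -> (-14, 9/2, 15)
# (2, 3)     -> (-13, 1, 26)
```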

Remark 3.4.5. Solution sets for general \(\A\x=\b\).

We briefly return to general (not necessarily wide) systems \(\A\x=\b.\) For \(\A\in\R^{m\times n},\b\in\R^m\) the solution set for \(\A\x=\b\) is either
  1. empty (when \(\A\x=\b\) is inconsistent),
  2. a singleton \(\left\{\vec{x_p}\right\}\) when \(\A\x=\b\) has the unique solution \(\vec{x_p}\) (when \(\A\x=\b\) reduces to square consistent \(\U\x=\vec{c} / \J\x=\d\)), or
  3. an affine subset of \(\R^n\) (to be proven below in Theorem 3.4.6 and Corollary 3.4.7).
For consistent \(\A\x=\b,\) which of the latter two possibilities holds depends not on whether \(\A\x=\b\) itself is square or wide but rather on whether \(\U\x=\vec{c} / \J\x=\d\) is square or wide. Even a tall system \(\A\x=\b\) may eliminate to a wide system \(\J\x=\d\) because we discard full zero rows in the process of forming \(\J\) and \(\d.\)
We prove that for consistent wide RREF systems \(\J\x=\d\) the solution set is an affine subset of \(\R^n\) in Theorem 3.4.6, generalizing to \(\A\x=\b\) in Corollary 3.4.7.

Proof.

Let \(\J\x=\d\) be a consistent RREF system exhibiting \(k\) free variables. We prove the three statements in the theorem by passing to a simplified equivalent consistent system: we assume a solution exists, use that assumption to solve the system algebraically, and finally show that the solution has the properties we claim for it.
To simplify the notation in this proof we assume without loss of generality that the last \(k\) variables are free variables, and the first \(n-k\) variables are pivot variables. That is, we may reorder the columns of \(\J\) which is permissible as long as we reorder variables in \(\x\) in the corresponding way. (See Remark A.3.2 in Section A.3.)
So without loss of generality we assume that the \(n-k\) pivot variables are
\begin{equation*} x_1,\,x_2,\,\ldots\,,\,x_{n-k} \end{equation*}
and the \(k\) free variables are
\begin{equation*} x_{n-k+1},\,x_{n-k+2},\,\ldots\,,x_n \end{equation*}
for which the RREF system is
\begin{equation*} \left(\begin{array}{cccccccc}1\amp 0\amp\!\!\cdots\!\!\amp 0\amp J_{1,n-k+1}\amp J_{1,n-k+2}\amp \!\!\cdots\!\!\amp J_{1,n}\\0\amp 1\amp \!\!\cdots\!\!\amp 0\amp J_{2,n-k+1}\amp J_{2,n-k+2}\amp \cdots\amp J_{2,n}\\ \vdots\amp \amp \ddots\amp \vdots \amp \vdots\amp \vdots\amp \amp \vdots\\0\amp\!\!\cdots\!\!\amp\!\!\cdots\!\!\amp 1\amp J_{m,n-k+1}\amp J_{m,n-k+2}\amp \cdots\amp J_{m,n}\end{array}\right)\left(\begin{array}{c}x_1\\x_2\\ \vdots\\x_{n-k}\\x_{n-k+1}\\x_{n-k+2}\\\vdots\\x_n\end{array}\right)=\left(\begin{array}{c}d_1\\d_2\\ \vdots\\d_m\end{array}\right) \end{equation*}
where \(m=n-k.\)
In this RREF system each of the \(n-k\) rows is a pivot row, each representing an equation with exactly one pivot variable and \(k\) free variables. Each such equation can be solved for its pivot variable; for each \(i=1,2,\ldots,n-k,\) we have
\begin{equation*} x_i=d_i-J_{i,n-k+1}x_{n-k+1}-J_{i,n-k+2}x_{n-k+2}-\cdots-J_{i,n}x_n \end{equation*}
which yields
\begin{align*} \amp\qquad\x=\left(\begin{array}{c}x_1\\x_2\\ \vdots\\x_{n-k}\\x_{n-k+1}\\x_{n-k+2}\\\vdots\\x_n\end{array}\right)\\ \amp=\left(\begin{array}{c}d_1-J_{1,n-k+1}x_{n-k+1}-J_{1,n-k+2}x_{n-k+2}-\cdots-J_{1,n}x_n\\d_2-J_{2,n-k+1}x_{n-k+1}-J_{2,n-k+2}x_{n-k+2}-\cdots-J_{2,n}x_n\\\vdots\\d_{n-k}-J_{n-k,n-k+1}x_{n-k+1}-J_{n-k,n-k+2}x_{n-k+2}-\cdots-J_{n-k,n}x_n\\x_{n-k+1}\\x_{n-k+2}\\\vdots\\x_n\end{array}\right)\\ =\,\amp\left(\begin{array}{c}d_1\\d_2\\\vdots\\d_{n-k}\\0\\0\\\vdots\\0\end{array}\right)+x_{n-k+1}\left(\begin{array}{c}-J_{1,n-k+1}\\-J_{2,n-k+1}\\\vdots\\-J_{n-k,n-k+1}\\1\\0\\\vdots\\0\end{array}\right)+x_{n-k+2}\left(\begin{array}{c}-J_{1,n-k+2}\\-J_{2,n-k+2}\\\vdots\\-J_{n-k,n-k+2}\\0\\1\\\vdots\\0\end{array}\right)\,+\,\cdots\\ \amp\qquad\qquad\qquad\qquad\qquad\cdots\,\,+\,\,x_n\left(\begin{array}{c}-J_{1,n}\\-J_{2,n}\\\vdots\\-J_{n-k,n}\\0\\0\\\vdots\\1\end{array}\right)\text{.} \end{align*}
We may replace the component variables with parameter variables \(s_1,s_2,\ldots,s_k\) to obtain
\begin{align*} \amp\left(\begin{array}{c}d_1\\d_2\\\vdots\\\!d_{n-k}\!\\0\\0\\\vdots\\0\end{array}\right)+s_1\left(\begin{array}{c}-J_{1,n-k+1}\\-J_{2,n-k+1}\\\vdots\\\!-J_{n-k,n-k+1}\!\\1\\0\\\vdots\\0\end{array}\right)+s_2\left(\begin{array}{c}-J_{1,n-k+2}\\-J_{2,n-k+2}\\\vdots\\\!-J_{n-k,n-k+2}\!\\0\\1\\\vdots\\0\end{array}\right)+\cdots+s_k\left(\begin{array}{c}-J_{1,n}\\-J_{2,n}\\\vdots\\\!-J_{n-k,n}\!\\0\\0\\\vdots\\1\end{array}\right)\text{.} \end{align*}
Denoting
\begin{equation*} \vec{x_p}=\left(\begin{array}{c}d_1\\d_2\\\vdots\\d_{n-k}\\0\\0\\\vdots\\0\end{array}\right)\qquad\text{and}\qquad \vec{x_h}_i=\left(\begin{array}{c}-J_{1,n-k+i}\\-J_{2,n-k+i}\\\vdots\\-J_{n-k,n-k+i}\\|\\\wh{e_i\!}\\|\end{array}\right) \end{equation*}
where the last \(k\) entries of \(\vec{x_h}_i\) form \(\wh{e_i},\) the \(i\)th standard basis vector of \(\R^k,\) we may write any solution \(\x\) of \(\J\x=\d\) as
\begin{equation*} \x=\vec{x_p}+s_1\vec{x_h}_1+s_2\vec{x_h}_2+\cdots+s_k\vec{x_h}_k,\text{ for some }s_1,\ldots,s_k\in\R. \end{equation*}

Proof.

Suppose \(\left(\A|\b\right)\) eliminates to \(\left(\J|\d\right)\) which exhibits \(k\) free variables and that
\begin{equation*} \x=\vec{x_p}+s_1\vec{x_h}_1+s_2\vec{x_h}_2+\cdots+s_k\vec{x_h}_k,\,\,\,s_1,\ldots,s_k\in\R \end{equation*}
is the general solution of \(\J\x=\d.\) By Theorem 3.1.29 the solution set of \(\J\x=\d\) is precisely the solution set of \(\A\x=\b,\) so (3.4.2) is the general solution of \(\A\x=\b.\)
Since the algorithm for finding the general solution set for a consistent wide \(\A\x=\b\) is so brief and transparent we offer it now, followed by examples. After the examples we present theorems regarding existence, uniqueness and properties.
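(Optional) For readers who like to experiment, the following is a minimal computational sketch of this procedure in Python using SymPy; it is not Algorithm 3.4.8 itself, and the helper name general_solution and the parameter symbols \(s_1,s_2,\ldots\) are our own choices. It row-reduces the augmented matrix and back-substitutes for the pivot variables, exactly as in the proof above.
```python
from sympy import Matrix, symbols

def general_solution(A, b):
    """Row-reduce (A|b) and back-substitute for the pivot variables,
    returning x as a column vector in the parameters s1, s2, ..."""
    n = A.cols
    J_aug, pivot_cols = A.row_join(b).rref()      # RREF of the augmented matrix
    if n in pivot_cols:                           # pivot in the RHS column
        raise ValueError("inconsistent system: no solutions")
    free = [j for j in range(n) if j not in pivot_cols]
    params = symbols(f"s1:{len(free) + 1}")       # one parameter per free variable
    x = [None] * n
    for j, s in zip(free, params):                # free variables become parameters
        x[j] = s
    for i, j in enumerate(pivot_cols):            # solve row i for its pivot variable
        x[j] = J_aug[i, n] - sum(J_aug[i, f] * x[f] for f in free)
    return Matrix(x)

# The wide system of the first example below (Example 3.4.9):
# the result reads (5/2 + (3/2)*s1, s1, -2).
A = Matrix([[2, -3, 0], [0, 0, 5]])
b = Matrix([5, -10])
print(general_solution(A, b).T)
```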

Example 3.4.9. Solving a small consistent wide nearly-RREF system.

Let’s solve the following system
\begin{equation} \left(\begin{array}{rrr|r}2\amp -3\amp 0\amp 5\\0\amp 0\amp 5\amp -10\end{array}\right).\tag{3.4.3} \end{equation}
To get the system into RREF form, all we need to do is divide each row by its pivot:
\begin{equation*} \begin{array}{llc} \left(\begin{array}{rrr|r}2\amp -3\amp 0\amp 5\\0\amp 0\amp 5\amp -10\end{array}\right) \!\!\!\!\amp\underset{\frac{1}{5}R_2\rightarrow R_2}{\xrightarrow{\frac{1}{2}R_1\rightarrow R_1}}\amp\!\!\!\! \left(\begin{array}{ccc|r}1\amp -3/2\amp 0\amp 5/2\\0\amp 0\amp 1\amp -2\end{array}\right).\end{array} \end{equation*}
The pivot variables are \(x_1,x_3\) and \(x_2\) is the lone free variable. The second equation immediately yields \(x_3=-2.\) Solving the first equation, we have
\begin{equation*} x_1-\frac{3}{2}x_2=\frac{5}{2}\qquad\Rightarrow\qquad x_1=\frac{5}{2}+\frac{3}{2}x_2. \end{equation*}
Having solved for each of the variables \(x_1,x_2,x_3\) in \(\x,\) the expression for a general solution vector to the system (3.4.3) is therefore
\begin{equation*} \x=\left(\begin{array}{c}\frac{5}{2}+\frac{3}{2}x_2\\x_2\\-2\,\,\,\end{array}\right)=\left(\begin{array}{c}5/2\\0\\-2\,\,\,\end{array}\right)+x_2\,\left(\begin{array}{c}3/2\\1\\0\end{array}\right). \end{equation*}
The fact that the second component of \(\x\) is the free variable is no longer relevant, so to streamline the notation a bit we replace the component variable \(x_2\) with the parameter variable \(s.\) The solution set is then
\begin{equation*} \left\{\left.\left(\begin{array}{c}5/2\\0\\-2\,\,\end{array}\right)+s\,\left(\begin{array}{c}3/2\\1\\0\end{array}\right)\right|\,s\in\R\right\} \end{equation*}
which represents a line in \(\R^3.\)
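(Optional) As a quick computational check, assuming Python with SymPy is available, we can substitute the parametrized solution back into (3.4.3) and confirm it holds for every value of the parameter \(s.\)
```python
from sympy import Matrix, Rational, simplify, symbols

# Substitute the claimed general solution of (3.4.3) back into the system;
# the residual A*x - b is the zero vector no matter the value of s.
s = symbols("s")
A = Matrix([[2, -3, 0], [0, 0, 5]])
b = Matrix([5, -10])
x = Matrix([Rational(5, 2), 0, -2]) + s * Matrix([Rational(3, 2), 1, 0])
print((A * x - b).applyfunc(simplify))   # Matrix([[0], [0]])
```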

Example 3.4.10. Solving a consistent wide nearly-RREF system with two free variables.

We solve the system
\begin{equation*} \left(\begin{array}{rrrr|r}4\amp 8\amp 0\amp -1\amp 1\\0\amp 0\amp -7\amp 9\amp -2\end{array}\right). \end{equation*}
To get the system into RREF form, all we need to do is divide each row by its pivot:
\begin{equation*} \begin{array}{llc} \left(\begin{array}{rrrr|r}4\amp 8\amp 0\amp -1\amp 1\\0\amp 0\amp -7\amp 9\amp -2\end{array}\right) \!\!\!\!\amp\underset{-\frac{1}{7}R_2\rightarrow R_2}{\xrightarrow{\frac{1}{4}R_1\rightarrow R_1}}\amp\!\!\!\! \left(\begin{array}{rrrr|r}1\amp 2\amp 0\amp -1/4\amp 1/4\\0\amp 0\amp 1\amp -9/7\amp 2/7\end{array}\right).\end{array} \end{equation*}
The pivot variables are \(x_1,x_3\) and \(x_2,x_4\) are free variables. We have
\begin{equation*} x_1+2x_2-\frac{1}{4}x_4=\frac{1}{4}\qquad\Rightarrow\qquad x_1=\frac{1}{4}-2x_2+\frac{1}{4}x_4 \end{equation*}
and
\begin{equation*} x_3-\frac{9}{7}x_4=\frac{2}{7}\qquad\Rightarrow\qquad x_3=\frac{2}{7}+\frac{9}{7}x_4 \end{equation*}
from which we may write the general solution to the system as
\begin{equation*} \x=\left(\begin{array}{c}\frac{1}{4}-2x_2+\frac{1}{4}x_4\\ x_2\\ \frac{2}{7}+\frac{9}{7}x_4\\x_4\end{array}\right)=\left(\begin{array}{c}1/4\\0\\2/7\\0 \end{array}\right)+x_2\left(\begin{array}{r}-2\\1\\0\\0\end{array}\right)+x_4\left(\begin{array}{c}1/4\\0\\9/7\,\\1\end{array}\right). \end{equation*}
Since the specific free variable indices are no longer important we replace the component variables \(x_2,x_4\) with the parameter variables \(s,t\) respectively to obtain the solution set
\begin{equation*} \left\{\left.\left(\begin{array}{c}1/4\\0\\2/7\\0 \end{array}\right)+s\left(\begin{array}{r}-2\\1\\0\\0\end{array}\right)+t\left(\begin{array}{c}1/4\\0\\9/7\,\\1\end{array}\right)\right|\,s,t\in\R\right\} \end{equation*}
which represents a plane in \(\R^4.\)
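(Optional) As a quick computational check, assuming Python with SymPy is available, substituting the parametrized solution back into the original system confirms that it holds for all \(s,t.\)
```python
from sympy import Matrix, Rational, simplify, symbols

# Substitute the claimed plane of solutions back into the original wide system;
# the residual A*x - b is the zero vector for all values of s and t.
s, t = symbols("s t")
A = Matrix([[4, 8, 0, -1], [0, 0, -7, 9]])
b = Matrix([1, -2])
x = (Matrix([Rational(1, 4), 0, Rational(2, 7), 0])
     + s * Matrix([-2, 1, 0, 0])
     + t * Matrix([Rational(1, 4), 0, Rational(9, 7), 1]))
print((A * x - b).applyfunc(simplify))   # Matrix([[0], [0]])
```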

Example 3.4.11. Solving a consistent wide system with one free variable.

Find the solution set for the system
\begin{equation*} \left(\begin{array}{rrrr|r}2\amp -2\amp -7\amp 3\amp 20\\-4\amp 5\amp 19\amp -8\amp -49\\2\amp -1\amp -2\amp 2\amp 5\end{array}\right). \end{equation*}
Solution: Performing Gauss-Jordan elimination on the corresponding augmented matrix yields
\begin{equation*} \begin{array}{llc} \left(\begin{array}{rrrr|r}2\amp -2\amp -7\amp 3\amp 20\\-4\amp 5\amp 19\amp -8\amp -49\\2\amp -1\amp -2\amp 2\amp 5\end{array}\right) \!\!\!\!\amp\underset{-R_1+R_3\rightarrow R_3}{\xrightarrow{2R_1+R_2\rightarrow R_2}}\amp\!\!\!\! \left(\begin{array}{rrrr|r}2\amp -2\amp -7\amp 3\amp 20\\0\amp 1\amp 5\amp -2\amp -9\\0\amp 1\amp 5\amp -1\amp -15\end{array}\right)\\ \amp \xrightarrow{-R_2+R_3\rightarrow R_3}\amp\!\!\!\! \left(\begin{array}{rrrr|r}2\amp -2\amp -7\amp 3\amp 20\\0\amp 1\amp 5\amp -2\amp -9\\0\amp 0\amp 0\amp 1\amp -6\end{array}\right)\\ \amp \underset{-3R_3+R_1\rightarrow R_1}{\xrightarrow{2R_3+R_2\rightarrow R_2}}\amp\!\!\!\! \left(\begin{array}{rrrr|r}2\amp -2\amp -7\amp 0\amp 38\\0\amp 1\amp 5\amp 0\amp -21\\0\amp 0\amp 0\amp 1\amp -6\end{array}\right)\\ \amp \xrightarrow{2R_2+R_1\rightarrow R_1}\amp\!\!\!\! \left(\begin{array}{rrrr|r}2\amp 0\amp 3\amp 0\amp -4\\0\amp 1\amp 5\amp 0\amp -21\\0\amp 0\amp 0\amp 1\amp -6\end{array}\right).\end{array} \end{equation*}
We see that \(x_1,x_2,x_4\) are pivot variables and \(x_3\) is a free variable.
Solving the individual equations in the RREF system, we have
\begin{equation*} 2x_1+3x_3=-4\qquad\Rightarrow\qquad x_1=-2-\frac{3}{2}x_3, \end{equation*}
\begin{equation*} x_2+5x_3=-21\qquad\Rightarrow\qquad x_2=-21-5x_3 \end{equation*}
and finally
\begin{equation*} x_4=-6 \end{equation*}
from which we may write the general solution to the RREF system as
\begin{equation*} \x=\left(\begin{array}{c}-2-\frac{3}{2}x_3\\-21-5x_3\\x_3\\-6\end{array}\right)=\left(\begin{array}{c}-2\\-21\\\,\,0\\-6 \end{array}\right) + x_3\left(\begin{array}{c}-3/2\\-5\,\\1\\0\end{array}\right). \end{equation*}
We may replace the component variable \(x_3\) by the parameter variable \(s.\) By Corollary 3.4.7 the solution set for the original system is
\begin{equation*} \left\{\left.\left(\begin{array}{c}-2\\-21\\\,\,0\\-6\end{array}\right) + s\left(\begin{array}{c}-3/2\\-5\,\\1\\0\end{array}\right)\right|\,\,s\in\R\right\} \end{equation*}
which represents a line in \(\R^4.\)
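(Optional) As a quick computational check, assuming Python with SymPy is available, we can confirm that both the particular vector and the direction vector found above satisfy the original system for every value of \(s.\)
```python
from sympy import Matrix, Rational, simplify, symbols

# Substitute the line of solutions found above into all three original equations;
# the residual A*x - b is the zero vector for every value of s.
s = symbols("s")
A = Matrix([[2, -2, -7, 3], [-4, 5, 19, -8], [2, -1, -2, 2]])
b = Matrix([20, -49, 5])
x = Matrix([-2, -21, 0, -6]) + s * Matrix([Rational(-3, 2), -5, 1, 0])
print((A * x - b).applyfunc(simplify))   # Matrix([[0], [0], [0]])
```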

Example 3.4.12. Solving a consistent wide system with two free variables.

Find the solution set to \(\A\x=\b\) for \(A=\left(\begin{array}{rrrr}3\amp 2\amp 15\amp -20\\6\amp 4\amp 10\amp -16\\-3\amp -2\amp -10\amp 14\end{array}\right)\) and \(\vec{b}=\left(\begin{array}{r}-25\\-14\\16\end{array}\right).\)
Solution: Performing Gauss-Jordan elimination on the corresponding augmented matrix yields
\begin{equation*} \begin{array}{llc} \left(\begin{array}{rrrr|r}3\!\amp \!2\!\amp \!15\!\amp \!-20\!\amp \!-25\\6\!\amp \!4\!\amp \!10\!\amp \!-16\!\amp \!-14\\-3\!\amp \!-2\!\amp \!-10\!\amp \!14\!\amp \!\!16 \end{array} \right) \!\!\!\!\amp\underset{R_1+R_3\rightarrow R_3}{\xrightarrow{-2R_1+R_2\rightarrow R_2}}\amp\!\!\!\!\! \left(\begin{array}{rrrr|r}3\!\amp \!2\!\amp \!15\!\amp \!-20\!\amp \!-25\\0\!\amp \!0\!\amp \!-20\!\amp \!24\!\amp \!36\\0\!\amp \!0\!\amp \!5\!\amp \!-6\!\amp \!-9 \end{array} \right)\\ \amp \xrightarrow{R_2\longleftrightarrow R_3}\amp\!\!\!\!\! \left(\begin{array}{rrrr|r}3\!\amp \!2\!\amp \!15\!\amp \!-20\!\amp \!-25\\0\!\amp \!0\!\amp \!5\!\amp \!-6\!\amp \!-9 \\0\!\amp \!0\!\amp \!-20\!\amp \!24\!\amp \!36\end{array} \right)\\ \amp \xrightarrow{4R_2+R_3\rightarrow R_3}\amp\!\!\!\!\! \left(\begin{array}{rrrr|r}3\!\amp \!2\!\amp \!15\!\amp \!-20\!\amp \!-25\\0\!\amp \!0\!\amp \!5\!\amp \!-6\!\amp \!-9\\0\!\amp \!0\!\amp \!0\!\amp \!0\!\amp \!0 \end{array} \right)\\ \amp \xrightarrow{-3R_2+R_1\rightarrow R_1}\amp\!\!\!\!\! \left(\begin{array}{rrrr|r}3\!\amp \!2\!\amp \!0\!\amp \!-2\!\amp \!2\\0\!\amp \!0\!\amp \!5\!\amp \!-6\!\amp \!-9\\0\!\amp \!0\!\amp \!0\!\amp \!0\!\amp \!0 \end{array} \right).\end{array} \end{equation*}
We see that \(x_1,x_3\) are pivot variables and \(x_2,x_4\) are free variables.
Solving the individual equations in the RREF system, we have
\begin{equation*} 3x_1+2x_2-2x_4=2\qquad\Rightarrow\qquad x_1=\frac{2}{3}-\frac{2}{3}x_2+\frac{2}{3}x_4 \end{equation*}
and
\begin{equation*} 5x_3-6x_4=-9\qquad\Rightarrow\qquad x_3=-\frac{9}{5}+\frac{6}{5}x_4 \end{equation*}
from which we may write the general solution to the RREF system as
\begin{equation*} \x=\left(\begin{array}{c}\frac{2}{3}-\frac{2}{3}x_2+\frac{2}{3}x_4\\x_2\\-\frac{9}{5}+\frac{6}{5}x_4\\x_4\end{array}\right)=\left(\begin{array}{c}\,\,2/3\\\,\,0\\-9/5\\\,\,0\end{array}\right) + x_2\left(\begin{array}{c}-2/3\\1\\0\\0\end{array}\right) + x_4\left(\begin{array}{c}2/3\\0\\6/5\\1\end{array}\right). \end{equation*}
We may replace the component variables \(x_2\) and \(x_4\) by the parameter variables \(s\) and \(t,\) respectively. By Corollary 3.4.7 the solution set for the original system is
\begin{equation*} \left\{\left.\left(\begin{array}{c}\,\,2/3\\\,\,0\\-9/5\\\,\,0\end{array}\right) + s\left(\begin{array}{c}-2/3\\1\\0\\0\end{array}\right) + t\left(\begin{array}{c}2/3\\0\\6/5\\1\end{array}\right)\right|\,\,s,t\in\R\right\} \end{equation*}
which represents a plane in \(\R^4.\)
(Optional) Implementing the fraction-clearing described at the bottom of Algorithm 3.4.8, we pick up the above solution after we identify the free variables \(x_2,x_4.\) The pivots are \(3\) and \(5\) so we set \(x_2=3s\) and \(x_4=15t.\) We have
\begin{align*} 3x_1+2x_2-2x_4=2\quad\amp\Rightarrow\quad 3x_1+2(3s)-2(15t)=2\\ \quad\amp\Rightarrow\quad x_1=\frac{2}{3}-2s+10t \end{align*}
and
\begin{equation*} 5x_3-6x_4=-9\quad\Rightarrow\quad 5x_3-6(15t)=-9\quad\Rightarrow\quad x_3=-\frac{9}{5}+18t \end{equation*}
from which we may write the general solution to the RREF system as
\begin{equation*} \x=\left(\begin{array}{c}\frac{2}{3}-2s+10t\\3s\\-\frac{9}{5}+18t\\15t\end{array}\right)=\left(\begin{array}{c}\,\,2/3\\\,\,0\\-9/5\\\,\,0\end{array}\right) + s\left(\begin{array}{r}-2\\3\\0\\0\end{array}\right) + t\left(\begin{array}{c}10\\0\\18\\15\end{array}\right). \end{equation*}
By Corollary 3.4.7 the solution set for the original system is
\begin{equation*} \left\{\left.\left(\begin{array}{c}\,\,2/3\\\,\,0\\-9/5\\\,\,0\end{array}\right) + s\left(\begin{array}{r}-2\\3\\0\\0\end{array}\right) + t\left(\begin{array}{c}10\\0\\18\\15\end{array}\right)\right|\,\,s,t\in\R\right\} \end{equation*}
which is the same plane in \(\R^4\) as before, just parametrized differently.
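(Optional) Assuming Python with SymPy, we can confirm that both parametrizations, with and without fraction-clearing, satisfy the original system for all parameter values.
```python
from sympy import Matrix, Rational, simplify, symbols

# Both parametrizations of Example 3.4.12, the fractional one and the
# fraction-cleared one, leave zero residual A*x - b for all s and t.
s, t = symbols("s t")
A = Matrix([[3, 2, 15, -20], [6, 4, 10, -16], [-3, -2, -10, 14]])
b = Matrix([-25, -14, 16])
xp = Matrix([Rational(2, 3), 0, Rational(-9, 5), 0])
x_frac = xp + s * Matrix([Rational(-2, 3), 1, 0, 0]) + t * Matrix([Rational(2, 3), 0, Rational(6, 5), 1])
x_cleared = xp + s * Matrix([-2, 3, 0, 0]) + t * Matrix([10, 0, 18, 15])
print((A * x_frac - b).applyfunc(simplify))      # Matrix([[0], [0], [0]])
print((A * x_cleared - b).applyfunc(simplify))   # Matrix([[0], [0], [0]])
```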
It is of interest to investigate how the number of solutions of a system \(\A\x=\b\) depends on the entries of \(\A\) and \(\b.\) In the next example, we fix \(\A\) and allow variable entries in the target vector \(\b,\) determining which values of those variables yield a system with no solutions and which yield an infinite number of solutions.

Example 3.4.13. Changing the RHS of the system.

Consider the \(2\times3\) system below
\begin{equation*} \begin{array}{rrrrrrrr} \amp 5x \amp - \amp 4y \amp + \amp 6z \amp = \amp b_1\\ \amp -15x \amp + \amp 12y \amp - \amp 18z \amp = \amp b_2 \end{array}. \end{equation*}
  1. Find a right hand side (RHS) which yields an infinite number of solutions.
  2. Find a RHS which yields no solutions.
Solution: The LHS satisfies the relation \(R_2=-3R_1.\) If the RHS vector satisfies the same relation, there will be an infinite number of solutions because elimination will produce a full zero row. The vector \(\vec{b}=\left(\begin{array}{r} 1\\-3 \end{array} \right),\) for example, satisfies this relation and hence yields a singular system with an infinite number of solutions, namely all solutions of the first equation.
If we eliminate the corresponding augmented matrix we get:
\begin{equation} \left(\begin{array}{rrr|r}5\amp -4\amp 6\amp 1\\-15\amp 12\amp -18\amp -3 \end{array} \right) \xrightarrow{3R_1 + R_2 \rightarrow R_2}\left(\begin{array}{rrr|r} 5 \amp -4 \amp 6 \amp 1 \\ 0 \amp 0 \amp 0\amp 0 \end{array} \right)\tag{3.4.4} \end{equation}
and see that, as guaranteed by Corollary 3.1.32, the row-echelon form has a full zero row, confirming an infinite number of solutions.
We can use \(b_1=1\) in an RHS which yields no solutions. For \(b_1=1\) and \(b_2=-2,\) elimination results in a bottom row whose LHS entries are all zero but whose RHS entry is nonzero, contradicting the assumption that a solution exists. The corresponding augmented matrix is:
\begin{equation*} \left(\begin{array}{rrr|r}5\amp -4\amp 6\amp 1\\-15\amp 12\amp -18\amp -2 \end{array} \right) \xrightarrow{3R_1 + R_2 \rightarrow R_2}\left(\begin{array}{rrr|r} 5 \amp -4 \amp 6 \amp 1 \\ 0 \amp 0 \amp 0\amp 1 \end{array} \right) \end{equation*}
indicating that the system has no solution.
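(Optional) Assuming Python with SymPy, the two choices of RHS can be distinguished computationally by comparing the rank of the LHS matrix with the rank of the augmented matrix; we have not needed the notion of rank here, so treat it only as a convenient stand-in for checking whether elimination produces an LHS-only zero row.
```python
from sympy import Matrix

# Compare the rank of the LHS matrix with the rank of the augmented matrix
# for the two right-hand sides chosen above: equal ranks mean the system is
# consistent (here, with infinitely many solutions), unequal ranks mean none.
A = Matrix([[5, -4, 6], [-15, 12, -18]])
for b in (Matrix([1, -3]), Matrix([1, -2])):
    print(b.T, A.rank(), A.row_join(b).rank())
# b = (1, -3): ranks 1 and 1  -> consistent, infinitely many solutions
# b = (1, -2): ranks 1 and 2  -> inconsistent, no solutions
```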

Remark 3.4.14. Theoretical underpinnings.

A good algorithm gives you what you ask for, and nothing else. Can we be assured that Algorithm 3.4.8 does this, that is, that it gives us every solution to \(\A\x=\b\) and nothing else? The same considerations noted in Remark 3.1.5 hold in the wide case, but here we will be more detailed and rigorous in our discussion.
Existence: Let \(\left(\A|\b\right)\) eliminate to \(\left(\J|\d\right).\) If \(\J\x=\d\) is consistent, the explicit solution(s) may be obtained by solving the equations associated with each successive row for its variable in \(\x\) and designating any free variables as parameters. In this case by Corollary 3.4.7 we know that \(\A\x=\b\) has solution(s), precisely the solutions of \(\J\x=\d.\) The condition for existence of solutions to \(\A\x=\b\) is therefore the existence of solutions to the associated \(\J\x=\d.\)
Uniqueness: We will find that although the representation of a general element of the solution set of a given \(\A\x=\b\) is highly non-unique (see Section A.4), the solution set itself is unique. In Theorem 3.4.6 we prove that the solution set for a consistent wide \(\J\x=\d\) is a unique \(k\)-parameter affine subset of \(\R^n,\) leading immediately to Corollary 3.4.7 by which we know the solution set to a given consistent wide \(\A\x=\b\) is the same unique \(k\)-parameter affine subset of \(\R^n\) as the solution set for the associated \(\J\x=\d\text{.}\)

Remark 3.4.15. Free variables and their impact on solution sets.

Here we begin a discussion which leads us to the next subsection regarding nullspaces.
Free variables are just what they sound like: free to be anything we want them to be. Free variables (parameters) in affine sets are allowed, indeed required, to range through all values in \(\R.\)
Suppose a consistent system \(\left(\A|\b\right)\) eliminates to a wide \(\left(\J|\d\right).\) Then \(\left(\J|\d\right)\) must exhibit at least one free variable, leading to a parametrized general solution \(\x\) whose solution set is an affine subset of \(\R^n.\)
When the general solution \(\x\) of \(\A\x=\b\) exhibits at least one parameter, the value of the product \(\A\x\) must be \(\b\) no matter what the values of the parameters are, leading to the following theorem which sets the stage for characterization of the terms in the general solution.
Repeated application of Theorem 3.4.16 yields a generalization to multi-parameter affine solution sets in \(\R^n,\) as in the case when \(\x=\vec{a}+s\vec{u}+t\vec{v}\) solves \(\A\x=\b\) for every \(s,t\in\R,\) giving us Corollary 3.4.17 to Theorem 3.4.16.
In the expression for the general solution of a consistent wide or singular system \(\A\x=\b,\) all the distinct \(\vec{x_h}_i\) satisfy \(\A\x=\vec{0}.\) It turns out that for a given \(\A,\,\,\) all vectors \(\x\) for which \(\A\x=\vec{0}\) can be constructed as linear combinations of these nonzero \(\vec{x_h}_i,\) but we will need to wait until Section 6.1 for a careful treatment.
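(Optional) Assuming Python with SymPy, we can verify this claim numerically for Example 3.4.12: each homogeneous vector \(\vec{x_h}_i\) in that general solution satisfies \(\A\x=\vec{0}.\)
```python
from sympy import Matrix, Rational

# Each homogeneous vector x_h from Example 3.4.12 satisfies A*x_h = 0.
A = Matrix([[3, 2, 15, -20], [6, 4, 10, -16], [-3, -2, -10, 14]])
for xh in (Matrix([Rational(-2, 3), 1, 0, 0]),
           Matrix([Rational(2, 3), 0, Rational(6, 5), 1])):
    print((A * xh).T)   # Matrix([[0, 0, 0]]) in both cases
```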