The row space of \(\A,\) a subset of \(\R^n,\) is the set of all linear combinations of the rows of \(\A.\) Similarly the column space of \(\A,\) a subset of \(\R^m,\) is the set of all linear combinations of the columns of \(\A.\) The nullspace of \(\A,\) defined as the set of vectors \(\x\in\R^n\) for which \(\A\x=\vec{0},\) turns out to be the set of all linear combinations of the \(\vec{x_h}_i\)'s produced by Gauss-Jordan elimination.
Since each of these is a vector space we will need to be able to find a basis. In this section we will show how to find bases for each of the nullspace, row space and column space of a given matrix \(\A\) and develop some theory regarding these three important vector spaces.
Subsection 6.1.1 The Nullspace of \(\A\)
Remark 6.1.1. About \(\boldsymbol{N(\A)}\).
The nullspace of a matrix \(\A\) may be interpreted in two complementary ways:
\(N(\A)\) comprises the set of all vectors which are orthogonal to all of the rows of \(\A\) (and hence all linear combinations of the rows of \(\A\)).
Each nonzero vector in \(N(\A)\) contains the coefficients of a linear combination of the columns of \(\A\) which equals \(\vec{0}.\) This shows that there is a nontrivial linear relationship (in fact, many) between the columns of \(\A,\) which means that at least one of the columns of \(\A\) may be written in terms of the other columns of \(\A.\)
Since \(\A\x=\vec{0}\) always has at least the trivial solution, \(\A\x=\vec{0}\) is always consistent and \(N(\A)\) is never empty.
The next theorem tells us that elimination has no effect on the nullspace of a matrix, which is very convenient.
Theorem 6.1.2. Elimination does not affect the nullspace.
Let \(\A\in\R^{m\times n}\) and suppose \(\U\) and \(\J\) are obtained from \(\A\) by elimination and Gauss-Jordan elimination respectively. Then \(N(\A)=N(\U)=N(\J).\)
Proof.
Fix \(\A\in\R^{m\times n}\) and suppose \(\U,\,\J\) are obtained from \(\A\) by elimination and Gauss-Jordan elimination, respectively. The nullspaces \(N(\A),\,N(\U),\,N(\J)\) are the solution sets to their respective homogeneous equations. By Theorem 3.1.29, the homogeneous equations for \(\A,\U\) and \(\J\) have the same solution set, so \(N(\A)=N(\U)=N(\J).\)
Corollary 6.1.3. Elimination preserves linear relationships between columns.
Suppose that \(\A\in\R^{m\times n}\) eliminates to \(\U.\) Then the columns of \(\A\) and the columns of \(\U\) satisfy exactly the same linear relationships: \(c_1\A_{\ast 1}+c_2\A_{\ast 2}+\cdots+c_n\A_{\ast n}=\vec{0}\) if and only if \(c_1\U_{\ast 1}+c_2\U_{\ast 2}+\cdots+c_n\U_{\ast n}=\vec{0}.\)
To find \(N(\A)\) for a given matrix \(\A\text{,}\) we proceed as follows:
Use Gauss-Jordan elimination to convert \(\A\) to \(\J,\)
solve the homogeneous equation for \(\J,\) and
express the solution set \(N(\A)=N(\J)\) in parametric variables, citing Theorem 6.1.2.
For the homogeneous equation, there is no need to augment with a RHS when eliminating. Because elimination on a column of \(0\)s always leaves that column of \(0\)s intact, there is no need to carry the RHS of the augmented matrix representation. One may simply eliminate on \(\A\) alone, the LHS of \(\left(\A|\vec{0}\right),\) to obtain \(\left(\U|\vec{0}\right)\) or \(\left(\J|\vec{0}\right).\)
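For readers who want to check such computations by machine, the following is a minimal sketch of the procedure above using SymPy; SymPy is not part of this text, and the matrix below is a hypothetical example.

```python
# A computational sketch of the nullspace procedure above, using SymPy.
# The matrix A here is a hypothetical example, not one from the text.
from sympy import Matrix

A = Matrix([[1, 2, 1],
            [2, 4, 2]])

J, pivot_cols = A.rref()   # Gauss-Jordan elimination: J is the RREF of A
print(J)                   # Matrix([[1, 2, 1], [0, 0, 0]])
print(pivot_cols)          # (0,)  -- x1 is the pivot variable; x2, x3 are free

# nullspace() returns one spanning vector per free variable, in the spirit of
# Algorithm 3.4.8; no augmented RHS column is needed for the homogeneous equation.
for v in A.nullspace():
    print(v.T)             # Matrix([[-2, 1, 0]]) and Matrix([[-1, 0, 1]])
```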
Example 6.1.5. Finding the nullspace of a \(\boldsymbol{2\times2}\) matrix.
We find \(N(\A)\) for \(\A=\left(\begin{array}{rr}6\amp -2\\ -3\amp 1\end{array}\right).\)
Solution: Applying \(R_1+2R_2\rightarrow R_2\) to \(\A\) yields \(\U=\left(\begin{array}{rr}6\amp -2\\ 0\amp 0\end{array}\right)\) so \(x_1\) is a pivot variable and \(x_2\) is a free variable. We have
Replacing the free variable \(x_2\) with the parameter \(t\) and invoking Theorem 6.1.2 yields the homogeneous equation solution set
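As an optional check (a sketch, not part of the text's development), SymPy reproduces this nullspace:

```python
# SymPy check of Example 6.1.5.
from sympy import Matrix

A = Matrix([[6, -2],
            [-3, 1]])
print(A.nullspace())   # [Matrix([[1/3], [1]])]: every multiple of (1/3, 1),
                       # i.e. the line of multiples of (1, 3), as found above.
```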
Example 6.1.7. Finding the nullspace of a \(\boldsymbol{1\times4}\) matrix.
We find \(N(\A)\) for \(\A=\left(\begin{array}{rrrr}1\amp 2\amp 3\amp 4\end{array}\right).\)
Solution: The present \(\A,\) a single-row matrix whose leading entry is \(1,\) is already in RREF; \(x_1\) is the lone pivot variable and \(x_2,x_3,x_4\) are free variables. We have
Replacing the component variables \(x_2,x_3,x_4\) with the parameter variables \(r,s,t\) and invoking Theorem 6.1.2 yields the homogeneous equation solution set
which represents a three-dimensional hyperplane in \(\R^4.\)
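Again as an optional check (a sketch, not part of the text), SymPy produces one spanning vector per free variable:

```python
# SymPy check of Example 6.1.7.
from sympy import Matrix

A = Matrix([[1, 2, 3, 4]])
J, pivots = A.rref()
print(pivots)              # (0,)  -- x1 is the lone pivot variable
for v in A.nullspace():    # three spanning vectors, one per free variable
    print(v.T)             # (-2, 1, 0, 0), (-3, 0, 1, 0), (-4, 0, 0, 1)
```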
Example 6.1.8. Finding the nullspace of a \(\boldsymbol{3\times1}\) matrix.
We find \(N(\A)\) for \(\A=\left(\begin{array}{r}3\\-2\\1 \end{array}\right).\)
Solution: Performing Gauss-Jordan elimination on \(\A\) yields \(\J=\left(\begin{array}{r}1\\0\\0\end{array}\right)\) (we leave it to the reader to fill in the details). Here the only variable \(x_1\) in \(\J\x=\vec{0}\) is a pivot variable so there are no free variables. The only solution to \(\J\x=\vec{0},\) and hence to \(\A\x=\vec{0},\) is \(\x=\vec{0},\) which in \(\R^1\) is simply the number \(0.\) Hence \(N(\A)=\left\{0\right\},\) which represents a single point in \(\R.\)
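For a trivial nullspace the same check (a sketch, not part of the text) returns no spanning vectors at all:

```python
# SymPy check of Example 6.1.8.
from sympy import Matrix

A = Matrix([[3], [-2], [1]])
print(A.nullspace())   # []  -- no free variables, so N(A) = {0}
```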
Theorem 6.1.9. Solutions to the homogeneous equation are present in all solutions to \(\A\x=\b\).
Let \(\A\in\R^{m\times n}\) and \(\b\in\R^m,\) and suppose \(\A\x=\b\) is consistent. Let \(\vec{x_h}\in N(\A)\) and suppose \(\vec{x_p}\) is any particular solution of \(\A\x=\b.\) Then \(\vec{x_h}+\vec{x_p}\) is a solution of \(\A\x=\b.\)
Proof.
The proof is a worksheet exercise.
Theorem 6.1.9 casts Corollary 3.4.7 in a new light: Any solution of a wide or singular system is necessarily the sum of some particular solution to \(\A\x=\b\) and a general nullspace vector.
Corollary 6.1.10. The form of nullspace vectors.
Let \(\A\in\R^{m\times n},\) suppose \(\J\) is the RREF matrix associated with \(\A,\) and assume that \(\J\x=\vec{0}\) exhibits \(k\) free variables. Every \(\x\in N(\A)\) can be expressed as
where \(\vec{x_p}\) is a particular solution of \(\A\x=\vec{0}\) and the \(\vec{x_h}_i\)'s are solutions of \(\A\x=\vec{0},\) each obtained by Algorithm 3.4.8. That is, the set \(\left\{\vec{x_h}_1,\vec{x_h}_2,\ldots,\vec{x_h}_k\right\}\) found by elimination or Gauss-Jordan elimination on \(\A\) is a spanning set for \(N(\A).\) We may take \(\vec{x_p}=\vec{0}\) since \(\vec{0}\) satisfies \(\A\vec{0}_n=\vec{0}_m.\)
It should not surprise us that nullspace elements take the form given in Corollary 6.1.10. For any vector \(\u\in N(\A)\) and any real number \(s,\) all vectors \(s\u\) must lie in \(N(\A)\) since \(\A\u=\vec{0}\Rightarrow\A(s\u)=s\A\u=s\vec{0}=\vec{0}.\) So if \(N(\A)\) contains anything other than \(\vec{0},\) it contains at least a line.
Indeed, every nullspace is exactly one of the following
the trivial nullspace \(\left\{\vec{0}\right\},\)
some line through the origin,
some plane through the origin, or
some \(k-\)dimensional “hyperplane” through the origin for some integer \(k\ge3.\)
Theorem 6.1.11. Linear independence of the set of vectors obtained by elimination on \(\A\x=\vec{0}\).
Let \(\A\in\R^{m\times n}\) be wide or singular. The set \(\left\{\vec{x_h}_1,\vec{x_h}_2,\ldots,\vec{x_h}_k\right\}\) obtained by elimination or Gauss-Jordan elimination on \(\A\x=\vec{0}\) is linearly independent.
Proof.
Suppose \(\A\in\R^{m\times n}\) is wide or singular and that the set \(\left\{\vec{x_h}_1,\vec{x_h}_2,\ldots,\vec{x_h}_k\right\}\) was obtained by Gauss-Jordan elimination. From the proof of Theorem 3.4.6 we have
Considering the \((n-k+1)^{\text{st}}\) row we must have \(c_1=0.\) Similar consideration applies to each of rows \(n-k+2,n-k+3,\ldots,n,\) from which we conclude that \(c_2=c_3=\cdots=c_k=0.\) Hence by Definition 5.4.3 the set \(\left\{\vec{x_h}_1,\vec{x_h}_2,\ldots,\vec{x_h}_k\right\}\) obtained by Gauss-Jordan elimination is linearly independent, as desired.
Theorem 6.1.12. Span of the set of vectors obtained by elimination on \(\A\x=\vec{0}\).
Let \(\A\in\R^{m\times n}\) be wide or singular. The set \(\left\{\vec{x_h}_1,\vec{x_h}_2,\ldots,\vec{x_h}_k\right\}\) obtained by elimination or Gauss-Jordan elimination on \(\A\x=\vec{0}\) spans \(N(\A).\)
Proof.
Suppose \(\A\in\R^{m\times n}\) is wide or singular. Let \(\vec{x_h}\in N(\A)\) and suppose that the set \(\left\{\vec{x_h}_1,\vec{x_h}_2,\ldots,\vec{x_h}_k\right\}\) was obtained by Gauss-Jordan elimination. Corollary 6.1.10 guarantees that \(\vec{x_h}\) can be written as a linear combination of the vectors in \(\left\{\vec{x_h}_1,\vec{x_h}_2,\ldots,\vec{x_h}_k\right\},\) so we conclude that span\(\left\{\vec{x_h}_1,\vec{x_h}_2,\ldots,\vec{x_h}_k\right\}=N(\A).\)
Corollary 6.1.13. A basis for the nullspace.
Let \(\A\in\R^{m\times n}\) be wide or singular. The set \(\left\{\vec{x_h}_1,\vec{x_h}_2,\ldots,\vec{x_h}_k\right\}\) obtained by elimination or Gauss-Jordan elimination is a basis for \(N(\A).\)
Theorem 6.1.14. \(\boldsymbol{\dim\,N(\A)}\) equals the number of free variables in \(\U\x=\c\).
Let \(\A\in\R^{m\times n}\) and suppose \(\A\x=\b\) eliminates to \(\U\x=\c.\) Then \(\dim\,N(\A)\) equals the number of free variables of \(\x\) in \(\U\x=\c.\)
Proof.
Assume the hypotheses. If \(\U\) is nonsingular then by Theorem 3.10.18, \(\U\) exhibits a full set of pivots and hence \(\U\x=\c\) has no free variables. By Theorem 5.1.25, \(N(\U)=\{\vec{0}\}\) so \(\dim\,N(\U)=0\) by Definition 5.4.44.
If \(N(\A)\ne\{\vec{0}\}\) then by Corollary 6.1.13 the set \(\left\{\vec{x_h}_1,\vec{x_h}_2,\ldots,\vec{x_h}_k\right\}\) obtained by elimination is a basis for \(N(\A),\) so that \(\dim(N(\A))=k.\) By construction, each vector in \(\left\{\vec{x_h}_1,\vec{x_h}_2,\ldots,\vec{x_h}_k\right\}\) is associated with a unique free variable in \(\U\x=\c,\) so there are \(k\) free variables.
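The count in Theorem 6.1.14 is easy to verify computationally; the sketch below (not part of the text, with a hypothetical matrix) compares the number of non-pivot columns to the number of nullspace basis vectors SymPy returns.

```python
# Illustration of Theorem 6.1.14: dim N(A) = number of free variables.
from sympy import Matrix

A = Matrix([[1, 2, 0, 3],
            [0, 0, 1, 4]])         # hypothetical example

J, pivots = A.rref()
num_free = A.cols - len(pivots)    # columns without a pivot
print(num_free)                    # 2
print(len(A.nullspace()))          # 2 -- matches dim N(A)
```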
Let \(\A\in\R^{n\times n}.\) Then \(\A\) is invertible if and only if \(\dim N(\A)=0.\)
Proof.
The proof is a homework exercise.
Theorem 6.1.17. Invertibility and \(\dim N(\A^T)\).
Let \(\A\in\R^{n\times n}.\) Then \(\A\) is invertible if and only if \(\dim N(\A^T)=0.\)
Proof.
The proof is a homework exercise.
Note that if \(\A\in\R^{n\times n}\) is invertible, then \(N(\A^T)=N(\A)=\{\vec{0}\}\) and there is no basis for \(N(\A^T).\) But we can also have \(N(\A^T)=\{\vec{0}\}\) and no basis for \(N(\A^T)\) when \(\A\in\R^{m\times n},\,m\lt n,\) and the rows of \(\A\) are linearly independent as in
You should try to think of more creative examples of this circumstance. Again though, the more interesting case of \(N(\A^T)\) is when \(\A\in\R^{m\times n},\,m\gt n,\) which guarantees that there will be zero rows after elimination. In that case the number of zero rows is exactly \(m-r,\) where \(r\) is the number of pivots.
Subsection 6.1.2 Row Spaces and Column Spaces
Definition 6.1.18. The row space of \(\A\).
The row space of \(\A\in\R^{m\times n},\) denoted row\((\A),\) is the span of the set of \(m\) rows of \(\A\) and is a subset of \(\R^n:\)
Theorem 6.1.21. Row\((\A)\) and col\((\A)\) are subspaces, but not of the same space unless \(\A\) is square.
Let \(\A \in \R^{m \times n}.\) Then row\((\A)\) is a subspace of \(\R^{1 \times n}\) and col\((\A)\) is a subspace of \(\R^{m \times 1}.\)
Since an elementary row operation replaces one row by a linear combination of itself and another row, the next theorem follows immediately from the last.
Theorem 6.1.22. row\((\U)=\) row\((\A)\).
If \(\A\in\R^{m\times n}\) can be transformed to \(\U\) by a series of elementary row operations, then row\((\A)=\) row\((\U).\)
Proof.
Suppose \(\A\in\R^{m\times n}\) eliminates to \(\U.\) By Definition 3.1.27 we need to show that the row space of \(\A\) is unaffected by each of the two row operations in Definition 3.1.26. Certainly the row space is left invariant by a row swap, since sets are unordered collections and are unaffected by reordering. It remains to show that the row space is invariant under linear combination replacement.
So consider the set \(\{\A_{i\ast}\}\) of rows of \(\A\) and let \(\u\in\text{row}(\A):\)
We implement the general linear combination replacement \(\a_1R_i+\a_2R_j\rightarrow R_j,\,\a_2\ne0\) on \(\A,\) yielding \(\B\) which has the set of rows
and since \(\u\) was an arbitrary element of \(\text{row}(\A)\) we have shown that \(\text{row }(\A)\subseteq\text{row }(\B).\) To show the reverse subset inclusion, let \(\v\in\text{row}(\B):\)
showing that \(\text{row }(\B)\subseteq\text{row }(\A).\) With both subset directions proved, we conclude that \(\text{row }(\B)=\text{row }(\A).\)
Now \(\U\) is obtained from \(\A\) by a finite sequence of row swaps and/or linear combination row replacement, creating a finite sequence of intermediary matrices each of which has the same row space as its immediate predecessor. So (by the transitivity of equality) \(\text{row }(\U)=\text{row }(\A).\)
Theorem 6.1.23. The pivot rows of \(\U\) are a basis for row\((\A)\).
Suppose that \(\U\) is a row echelon form for \(\A.\) Then the pivot rows of \(\U\) form a basis for row\((\A).\)
Proof.
Let \(\A\in\R^{m\times n}\) eliminate to a row-echelon form \(\U\) with \(k\) pivots in rows \(p_1,p_2,\ldots,p_k.\) By Theorem 6.1.22 we know that the set of rows of \(\U\) spans \(\text{row }(\A).\) To show linear independence of the set \(\{\U_{p_1\ast},\U_{p_2\ast},\ldots,\U_{p_k\ast}\},\) suppose that
so \(c_1=0.\) Proceeding similarly through each pivot row in turn we find that each of the \(c_i\) must be zero, showing that \(\{\U_{p_1\ast},\U_{p_2\ast},\ldots,\U_{p_k\ast}\}\) satisfies Definition 5.4.3 so the set of pivot rows of \(\U\) is linearly independent. Now by Definition 5.4.25 the set of pivot rows of \(\U\) is a basis for \(\text{row }(\A).\)
Example 6.1.24.
We find a basis for the rowspace of the matrix \(\A=\left( \begin{array}{rrrr} 1 \amp 1 \amp 0 \amp 1 \\ 0 \amp 1 \amp 1 \amp 0 \\ 2 \amp 3 \amp 1 \amp 2 \end{array}\right).\) The matrix \(\A\) can be eliminated to \(\U=\left( \begin{array}{rrrr} 1 \amp 1 \amp 0 \amp 1 \\ 0 \amp 1 \amp 1 \amp 0 \\ 0 \amp 0 \amp 0 \amp 0 \end{array} \right).\) By Theorem 6.1.23, a basis for row\((\A)\) is \(\{ (1,1,0,1), (0,1,1,0)\}.\)
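A quick SymPy check of this example (a sketch, not part of the text):

```python
# SymPy check of Example 6.1.24.
from sympy import Matrix

A = Matrix([[1, 1, 0, 1],
            [0, 1, 1, 0],
            [2, 3, 1, 2]])
basis = A.rowspace()      # one basis vector per pivot row
print(len(basis))         # 2, matching the two pivot rows of U above
for r in basis:
    print(r)              # spans the same space as {(1,1,0,1), (0,1,1,0)},
                          # though SymPy may return a different echelon basis
```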
Example 6.1.25.
Let \(\A\) be the matrix in Example 3.4.12. Determine whether \(\begin{pmatrix}1\\2\\2\end{pmatrix} \in\text{ col} (\A).\)
Solution: We make this determination by attempting to solve \(\A\x=\begin{pmatrix}1\\2\\2 \end{pmatrix}.\) Either we can solve this system, or we will obtain a contradiction, in which case we will have shown that no linear combination of the columns of \(\A\) equals \(\begin{pmatrix}1\\2\\2 \end{pmatrix}\) and hence that \(\begin{pmatrix}1\\2\\2 \end{pmatrix}\) lies outside col\((\A).\)
We have already eliminated \(\A,\) so to avoid duplicating our efforts we need only apply the row operations we used in elimination in Example 3.4.12 to the RHS vector \(\begin{pmatrix}1\\2\\2\end{pmatrix}\)
A row whose LHS entries are all zero but whose RHS entry is nonzero indicates that the assumption of the existence of a solution leads to a contradiction, so we conclude that \(\begin{pmatrix}1\\2\\2\end{pmatrix} \not\in\) col\((\A).\)
We note that determining if a vector \(\v\in\text{col}(\A)\) is equivalent to determining if there is a solution to the equation \(\A\x=\v.\)
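One computational way to make that determination is sketched below (not part of the text; the matrix and vector are hypothetical, not those of Example 3.4.12): solve \(\A\x=\v\) by checking whether the augmented matrix acquires a pivot in its last column.

```python
# Testing whether v lies in col(A): solve A x = v, i.e. check that the
# augmented matrix (A|v) has no pivot in its last column.
# A and v below are hypothetical examples.
from sympy import Matrix

A = Matrix([[1, 2],
            [0, 1],
            [1, 3]])
v = Matrix([1, 2, 2])

aug = A.row_join(v)                # the augmented matrix (A|v)
J, pivots = aug.rref()
consistent = A.cols not in pivots  # a pivot in the last column means no solution
print(consistent)                  # False: v is not in col(A) for this A
```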
Theorem 6.1.26. Col\((\A)\) is a subspace of \(\R^m\).
Let \(\A\in\R^{m \times n}.\) The column space of \(\A,\) col\((\A),\) is a subspace of \(\R^m.\)
Proof.
The proof is a homework problem.
We will also give an easy method for finding a basis for the column space, but first we need some theorems.
Theorem 6.1.27. The set of pivot columns of \(\U\).
Suppose that \(\U\in \R^{m\times n}\) is in row-echelon form. Then the pivot columns of \(\U\) are a basis for col\((\U).\)
Proof.
Let \(\U\in\R^{m\times n}\) be in row-echelon form. If \(N(\U)=\{\vec{0}\}\) then by Theorem 5.4.6 the set of columns of \(\U\) is linearly independent; since by construction it spans col\((\U),\) it is a basis for col\((\U)\) by Definition 5.4.25. So suppose then that \(N(\U)\) is nontrivial with basis \(\boldsymbol{\Psi}=\left\{\vec{x_h}_1,\vec{x_h}_2,\ldots,\vec{x_h}_k\right\}.\) We assume for simplicity that the first \(n-k\) columns of \(\U\) are pivot columns just as we did in the proof of Theorem 3.4.6.
Recall from Theorem 3.4.6 the form of \(\vec{x_h}_i\) after we further eliminated \(\U\) to \(\J:\)
From this form we see that \(\vec{x_h}_i\) allows us to solve \(\J\vec{x_h}_i=\vec{0}\) for \(\J_{\ast(n-k+i)},\) and only for \(\J_{\ast(n-k+i)},\) in terms of the pivot columns of \(\J.\) By Corollary 6.1.3 we may solve \(\U\vec{x_h}_i=\vec{0}\) for \(\U_{\ast(n-k+i)},\) and only for \(\U_{\ast(n-k+i)},\) in terms of the pivot columns of \(\U;\) we may do this for each of the \(k\) free columns in \(\U.\)
Since each of the \(k\) free columns in \(\U\) may be expressed in terms of the pivot columns of \(\U,\) by Lemma 5.4.16 (applied \(k\) times) each of the free columns of \(\U\) contributes nothing to the span of the set of columns of \(\U,\) so the set of pivot columns of \(\U\) spans col\((\U).\)
The set \(\{\J_{\ast 1},\J_{\ast 2},\ldots,\J_{\ast(n-k)}\}\) of pivot columns of \(\J\) is linearly independent by Lemma 5.4.5 because each of these pivot columns is a canonical unit vector in \(\R^m.\) The corresponding set of pivot columns of \(\U\) is linearly independent by Corollary 6.1.3. Now by Definition 5.4.25, the set of pivot columns of \(\U\) is a basis for col\((\U).\)
Then columns \(1\) and \(2\) are the pivot columns and are clearly linearly independent, so the proof tells us our basis vectors will be those columns: \(\{\U_{\ast1},\U_{\ast2}\}.\) Consider the nonpivot column \(3\text{:}\) \(\U_{\ast3} = \left(\begin{array}{r} 4 \\2 \end{array} \right).\) Since \(U_{13}=4\) is not a pivot, there must be a pivot in row \(1\) to the left of \(4.\) There is; it is \(U_{11}=2.\) The proof tells us to set \(a_1 = \frac{U_{13}}{U_{11}}=\frac{4}{2}.\) Similarly, \(U_{23} = 2\) is not a pivot so we look for the pivot in row \(2\text{:}\) \(U_{22}=3.\) Then \(a_2 = \frac{U_{23}}{U_{22}}=\frac{2}{3}.\) We then see that the nonpivot column can be written in terms of the pivot columns:
We could similarly find a relation for the other nonpivot column, showing that indeed the linearly independent set \(\{ \U_{\ast1}, \U_{\ast2}\}\) spans the columns and so is a basis for col\((\U).\)
Definition 6.1.29. Pivots and free columns of \(\A\).
Let \(\A\in\R^{m\times n}\) eliminate to \(\U.\) We define the pivot columns of \(\A\) to be the columns of \(\A\) corresponding to the pivot columns of \(\U.\) All other columns of \(\A\) are called free columns.
Theorem 6.1.30. The set of pivot columns of \(\A\) is a basis for col\((\A)\).
Suppose \(\A\in\R^{m\times n}.\) The set of pivot columns of \(\A\) is a basis for \(\text{col}(\A).\)
Proof.
Let \(\A\in\R^{m\times n}\) eliminate to \(\U.\) By Theorem 6.1.27 the set \(\boldsymbol{\Psi_\U}\) of pivot columns of \(\U\) is linearly independent, so by Corollary 6.1.3 the set \(\boldsymbol{\Psi}_\A\) of pivot columns of \(\A\) is also linearly independent.
It remains to show that \(\boldsymbol{\Psi}_\A\) spans col\((\A).\) The full set of columns of \(\A\) certainly spans col\((\A),\) so if \(\U\) and hence \(\A\) have only pivot columns then the theorem holds. If on the other hand \(N(\U)\) is nontrivial then by Corollary 6.1.3 all nontrivial relations between the columns of \(\U\) are also nontrivial relations between the columns of \(\A.\) This means that by Lemma 5.4.16 each of the free columns of \(\A\text{,}\) like the corresponding free column of \(\U,\) contributes nothing to the span of the set of columns of \(\A\) and may be discarded. Hence \(\boldsymbol{\Psi}_\A\) spans, and is therefore a basis for, \(\text{col}(\A).\)
Note that we cannot use the pivot columns of the matrix \(\U\) in the above proof as our basis for col\((\A)\text{;}\) we must use the pivot columns of \(\A.\) An example will illustrate why this is true.
Example 6.1.31.
We find a basis for col\((\A)\) for the matrix \(\A=\left(\begin{array}{rrrr} 1 \amp 1 \amp 0 \amp 1 \\ 0 \amp 1 \amp 1 \amp 0 \\ 2 \amp 3 \amp 1 \amp 2 \end{array} \right).\) We know that this can be reduced via elementary operations to \(\U=\left(\begin{array}{rrrr} 1 \amp 1 \amp 0 \amp 1 \\ 0 \amp 1 \amp 1 \amp 0 \\ 0 \amp 0 \amp 0 \amp 0 \end{array} \right).\) Since \(\U\) has pivots in columns \(1\) and \(2,\) Theorem 6.1.30 tells us that a basis for the column space is formed by the corresponding columns of \(\A\text{:}\) \(\left( \begin{array}{r} 1\\0 \\2 \end{array} \right)\) and \(\left( \begin{array}{r} 1\\1 \\3 \end{array} \right).\) Thus col\((\A)\) = span\(\left\{ \left( \begin{array}{r} 1\\0 \\2 \end{array} \right), \left( \begin{array}{r} 1\\1 \\3 \end{array} \right)\right\}.\)
Note that the corresponding pivot columns of \(\U\) both have zeros in the third row. Clearly those cannot form a basis for col\((\A),\) since their span would only contain vectors with zero in the third row, while col\((\A)\) clearly contains vectors with nonzero entries in the third row.
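SymPy's column-space routine follows the same prescription, returning pivot columns of \(\A\) itself (a sketch, not part of the text):

```python
# SymPy check of Example 6.1.31.
from sympy import Matrix

A = Matrix([[1, 1, 0, 1],
            [0, 1, 1, 0],
            [2, 3, 1, 2]])
for c in A.columnspace():   # the pivot columns of A itself, not of U
    print(c.T)              # (1, 0, 2) and (1, 1, 3), as in the example
```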
Example 6.1.32.
Let \(\A=\left(\begin{array}{rrrr}1\amp 1\amp 2\amp 4\\1\amp 2\amp 2\amp 5\\1\amp 3\amp 2\amp 6 \end{array} \right).\) Find bases for \(N(\A),\) col\((\A),\) and row\((\A)\) and write each space in element-form notation.
Solution: \(\A\rightarrow\U=\left(\begin{array}{rrrr}1\amp 1\amp 2\amp 4\\0\amp 1\amp 0\amp 1\\0\amp 0\amp 0\amp 0 \end{array} \right),\) which yields pivot variables \(x_1,x_2\) and free variables \(x_3,x_4.\) From \(\U\x=\vec{0}\) we have the equations \(x_1+x_2+2x_3+4x_4=0\) and \(x_2+x_4=0,\) so letting \(x_3=1,\,x_4=0\) and \(x_3=0,\,x_4=1\) in turn gives us basis vectors \(\left(\begin{array}{r}-2\\0\\1\\0 \end{array} \right)\) and \(\left(\begin{array}{r}-3\\-1\\0\\1 \end{array} \right).\) Thus a basis for \(N(\A)\) is \(\left\{\left(\begin{array}{r}-2\\0\\1\\0 \end{array} \right),\,\left(\begin{array}{r}-3\\-1\\0\\1 \end{array} \right)\right\}\) and
We may use the two pivot rows \(\U_{1\ast}\) and \(\U_{2\ast}\) of \(\U\) as a basis for row\((\A)\) so a basis for row\((\A)\) is \(\left\{\left(\begin{array}{rrrr}1\amp 1\amp 2\amp 4 \end{array} \right),\left(\begin{array}{rrrr}0\amp 1\amp 0\amp 1 \end{array} \right)\right\},\) and
Now the pivot columns of \(\A\) are \(\A_{\ast1}\) and \(\A_{\ast2}\) so a basis for col\((\A)\) is \(\left\{\begin{pmatrix}1\\1\\1 \end{pmatrix} ,\begin{pmatrix}1\\2\\3 \end{pmatrix} \right\},\) and
\begin{equation}
\text{ col } (\A)=\left\{\left.a\begin{pmatrix} 1\\1\\1\end{pmatrix}+b\begin{pmatrix} 1\\2\\3\end{pmatrix}\right|\,a,b\in\R\right\}\text{.}\tag{6.1.5}
\end{equation}
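All three bases from this example can be reproduced with SymPy (a sketch, not part of the text):

```python
# SymPy check of Example 6.1.32.
from sympy import Matrix

A = Matrix([[1, 1, 2, 4],
            [1, 2, 2, 5],
            [1, 3, 2, 6]])

for v in A.nullspace():     # basis for N(A): (-2, 0, 1, 0) and (-3, -1, 0, 1)
    print(v.T)
for c in A.columnspace():   # basis for col(A): (1, 1, 1) and (1, 2, 3)
    print(c.T)
print(len(A.rowspace()))    # 2: row(A) has a basis of two (pivot-row) vectors
```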
Theorem 6.1.33. Invertibility and \(\dim\,\text{row}\,(\A)\).
Let \(\A\in\R^{n\times n}.\) Then \(\A\) is invertible if and only if \(\dim\,\text{row}\,(\A)=n.\)
Proof.
The proof is a homework exercise.
Theorem 6.1.34. Invertibility and \(\dim\,\text{col}\,(\A)\).
Let \(\A\in\R^{n\times n}.\) Then \(\A\) is invertible if and only if \(\dim\,\text{col}\,(\A)=n.\)
Proof.
The proof is a homework exercise.
Subsection 6.1.3 The rank of a matrix
It is not obvious that the maximal number of linearly independent rows of \(\A\) and the maximal number of linearly independent columns of \(\A\) are the same, but we will prove this fact in this subsection. This common number, which gives us important information about \(\A,\) is the dimension of the span of the set of rows (equivalently, of the set of columns) of \(\A\) and is called the rank of \(\A.\)
Definition 6.1.35. The row rank and column rank of \(\A\).
Let \(\A\in\R^{m\times n}.\) The row rank of \(\A\) is the maximal number of linearly independent rows in \(\A\) and the column rank of \(\A\) is the maximal number of linearly independent columns in \(\A.\)
Theorem 6.1.36. Row rank equals column rank.
Let \(\A\in\R^{m\times n}.\) The row rank of \(\A\) equals the column rank of \(\A.\)
Proof.
We have made this proof relatively easy with the groundwork we have laid to this point. Let \(\A\) eliminate to \(\U.\) By Lemma 5.4.41 the maximal number of linearly independent row vectors in \(\A\) is given by \(\dim\text{ row }\A=\dim\text{ row }\U,\) which equals the number of pivot rows of \(\U.\) But this also equals the number of pivot columns of \(\U,\) since each pivot is identified with a unique row and a unique column. By Theorem 6.1.27 the set of pivot columns of \(\U\) is a basis for \(\text{col }\U,\) and by Theorem 6.1.30 it corresponds to a basis for \(\text{col }\A\) consisting of columns of \(\A,\) which by Lemma 5.4.41 is a set with the maximal number of linearly independent columns in \(\A.\) Hence the row rank and the column rank of \(\A\) both equal the number of pivots of \(\U.\)
Definition 6.1.37. The rank of \(\A\).
Let \(\A\in\R^{m\times n}.\) The rank of \(\A,\) denoted \(r(\A),\) is the common value of the row rank of \(\A\) and the column rank of \(\A.\)
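The equality of row rank and column rank is easy to observe numerically; the sketch below (not part of the text, with a hypothetical matrix) shows both counts agreeing with SymPy's rank.

```python
# Illustration of Theorem 6.1.36 and Definition 6.1.37.
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6],
            [1, 0, 1]])        # hypothetical example
print(len(A.rowspace()))       # 2  (row rank)
print(len(A.columnspace()))    # 2  (column rank)
print(A.rank())                # 2  (the common value, r(A))
```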
Theorem 6.1.42. Invertibility of \(\A\) and \(\boldsymbol{r(\A)}\).
Let \(\A\in\R^{n\times n}.\) Then \(\A\) is invertible if and only if \(r(\A)=n.\)
Proof.
Let \(\A\in\R^{n\times n}.\)
\((\Rightarrow)\) If \(\A\) is invertible then by Theorem 5.4.8 the set of (all \(n\)) columns of \(\A\) is linearly independent, so by Definition 6.1.35 and Definition 6.1.37 we have \(r(\A)=n.\)
\((\Leftarrow)\) If \(r(\A)=n\) then by Definition 6.1.37 and Definition 6.1.35 there is a linearly independent set of \(n\) columns of \(\A.\) But the only set of \(n\) columns of \(\A\) is the set of all columns of \(\A,\) so the set of all \(n\) columns of \(\A\) is linearly independent, and by Theorem 5.4.8, \(\A\) is invertible.
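A small numerical illustration of Theorem 6.1.42 (a sketch, not part of the text; both matrices are hypothetical examples):

```python
# Full rank corresponds to invertibility (nonzero determinant) for square matrices.
from sympy import Matrix

A = Matrix([[2, 1],
            [1, 1]])
B = Matrix([[1, 2],
            [2, 4]])
print(A.rank() == A.rows, A.det() != 0)   # True True   -- A is invertible
print(B.rank() == B.rows, B.det() != 0)   # False False -- B is singular
```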
Theorem 6.1.43. Matrix Inverse Equivalence - The Big Theorem Part 3.
Let \(\A\in\R^{n\times n}.\) The following statements are equivalent (TFAE):