In addition to being CLC (closed under linear combinations) and satisfying the vector space axioms, vector spaces exhibit another important property. In each nontrivial vector space \(V\) we will be able to identify a finite, minimal set \(\mathcal{B}\) of vectors, called a basis, for which every vector in \(V\) can be written as a linear combination of the vectors in \(\mathcal{B}.\) To characterize bases we introduce the concepts of span and linear independence.
In this section we will denote an arbitrary set of vectors in a vector space \(V\) by \(\boldsymbol{\Psi},\) as in
\[ \boldsymbol{\Psi}:=\left\{\vec{v_1},\,\vec{v_2},\,\ldots,\vec{v_n}\right\}. \]
Suppose now that \(\A\x=\vec{0}\) for some \(\x\ne\vec{0},\) so that there exists a linear combination of the columns of \(\A\) equal to the zero vector. Then if \(x_i\ne0\) we can write the column \(\A_{*i}\) as a linear combination of the other columns of \(\A\text{:}\)
\[ \A_{*i}=-\frac{1}{x_i}\sum_{j\ne i}x_j\A_{*j}. \]
Thus in general, if a set of vectors has some nontrivial linear combination equal to \(\vec{0},\) the set of vectors is in some sense redundant. This redundancy is made precise in the next definition.
Definition 5.4.1. Linear dependence.
Let \(V\) be a vector space. If \(\boldsymbol{\Psi}:=\left\{\vec{v_1},\,\vec{v_2},\,\ldots,\vec{v_n}\right\}\) is a set of vectors in \(V\) satisfying
\[ c_1\vec{v_1}+c_2\vec{v_2}+\cdots+c_n\vec{v_n}=\vec{0} \]
for some set \(c_1,c_2,\ldots,c_n\) of constants which are not all \(0,\) then we say the set \(\boldsymbol{\Psi}\) is linearly dependent.
Lemma 5.4.2. Linear dependence and the expression of some vector in terms of the others.
Let \(V\) be a vector space. If \(\boldsymbol{\Psi}:=\left\{\vec{v_1},\,\vec{v_2},\,\ldots,\vec{v_n}\right\}\subset V\) is linearly dependent then for some \(j\in\{1,2,\ldots,n\}\) and for some constants \(c_1,c_2,\ldots,c_n\in\R\) we have
\[ \vec{v_j}=c_1\vec{v_1}+\cdots+c_{j-1}\vec{v_{j-1}}+c_{j+1}\vec{v_{j+1}}+\cdots+c_n\vec{v_n}. \]
Proof.
Let \(V\) be a vector space and suppose \(\boldsymbol{\Psi}:=\left\{\vec{v_1},\,\vec{v_2},\,\ldots,\vec{v_n}\right\}\) is linearly dependent. By Definition 5.4.1 fix \(j\in\{1,2,\ldots,n\}\) for which \(c_j\ne0\) in
\[ c_1\vec{v_1}+c_2\vec{v_2}+\cdots+c_n\vec{v_n}=\vec{0}. \]
Dividing through by \(c_j\) and solving for \(\vec{v_j}\) expresses \(\vec{v_j}\) as a linear combination of the other vectors in \(\boldsymbol{\Psi}.\)
Definition 5.4.3. Linear independence.
Let \(V\) be a vector space. If \(\boldsymbol{\Psi}:=\left\{\vec{v_1},\,\vec{v_2},\,\ldots,\vec{v_n}\right\}\) is a set of vectors in \(V\) satisfying
\[ c_1\vec{v_1}+c_2\vec{v_2}+\cdots+c_n\vec{v_n}=\vec{0} \]
only when \(c_1=c_2=\cdots=c_n=0\) then we say the set \(\boldsymbol{\Psi}\) is linearly independent.
Example 5.4.4.
The set of vectors \(\left\{ \left(\begin{array}{r}1\\2\\1 \end{array}\right), \left(\begin{array}{r}0\\1\\3 \end{array}\right), \left(\begin{array}{r}2\\3\\-1 \end{array}\right)\right\}\) is linearly dependent since
\[ 2\left(\begin{array}{r}1\\2\\1\end{array}\right)-\left(\begin{array}{r}0\\1\\3\end{array}\right)-\left(\begin{array}{r}2\\3\\-1\end{array}\right)=\left(\begin{array}{r}0\\0\\0\end{array}\right). \]
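The dependence relation above is easy to verify numerically. The following is a minimal sketch in Python with NumPy (an assumption; any numerical environment works), using the coefficients \(2,-1,-1\) from the displayed relation.

```python
import numpy as np

v1 = np.array([1, 2, 1])
v2 = np.array([0, 1, 3])
v3 = np.array([2, 3, -1])

# The nontrivial combination 2*v1 - v2 - v3 from Example 5.4.4
combo = 2 * v1 - v2 - v3
print(combo)  # [0 0 0], confirming linear dependence
```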
Lemma 5.4.5. Sets of canonical unit vectors are linearly independent.
Let \(\mathcal{\boldsymbol{E}}=\{\wh{e_1},\wh{e_2},\ldots,\wh{e_n}\}\subset\R^n\) be the set of canonical unit vectors in \(\R^n.\) Then any subset of \(\mathcal{\boldsymbol{E}}\) is linearly independent.
Proof.
The proof is a worksheet exercise.
Theorem 5.4.6. \(\boldsymbol{N(\A)}\) and linear independence of the set of columns of \(\A\).
Let \(\A\in\R^{m\times n}.\) Then \(N(\A)=\{\vec{0}\}\) if and only if the set \(\left\{\A_{\ast j}\right\}_{j=1}^n\) of columns of \(\A\) is linearly independent.
Remark 5.4.7. Determining linear independence of a finite set of vectors in \(\boldsymbol{\R^m}\).
Let \(\boldsymbol{\Psi}=\left\{\vec{v_1},\vec{v_2},\ldots,\vec{v_n}\right\}\subset\R^m.\) Then \(\boldsymbol{\Psi}\) is linearly independent if and only if the matrix
\[ \A=\begin{pmatrix} \vec{v_1} \amp \vec{v_2} \amp \cdots \amp \vec{v_n} \end{pmatrix}\in\R^{m\times n} \]
whose columns are the vectors of \(\boldsymbol{\Psi}\) satisfies \(N(\A)=\left\{\vec{0}\right\}.\)
Proof.
Suppose \(\boldsymbol{\Psi}=\left\{\vec{v_1},\vec{v_2},\ldots,\vec{v_n}\right\}\) is a set of vectors in \(\R^m\) and construct \(\A\in\R^{m\times n}\) as in the statement of the theorem.
\((\Rightarrow):\) If \(\boldsymbol{\Psi}\) is linearly independent then by Definition 5.4.3 the only linear combination of the columns of \(\A\) equaling \(\vec{0}\) is the trivial linear combination, which by Definition 5.1.23 is exactly the statement that \(N(\A)=\left\{\vec{0}\right\}.\)
\((\Leftarrow):\) Suppose \(N(\A)=\left\{\vec{0}\right\}.\) Then by Definition 5.1.23 the only linear combination of the columns of \(\A\) equaling \(\vec{0}\) is the trivial linear combination, which by Definition 5.4.3 is exactly the statement that \(\boldsymbol{\Psi}\) is linearly independent.
Theorem 5.4.8. Invertibility and linear independence of the set of columns of \(\A\).
Let \(\A\in\R^{n\times n}.\) Then \(\A\) is invertible if and only if the set \(\left\{\A_{\ast j}\right\}_{j=1}^n\) of columns of \(\A\) is linearly independent.
Proof.
The proof is a worksheet exercise.
Theorem 5.4.9. Singular matrices guarantee range collisions.
An \(n\times n\) matrix \(\A\) is singular if and only if there exist distinct \(\u,\v\in\R^n\) for which \(\A\u=\A\v.\)
Proof.
\((\Rightarrow)\) Suppose \(\A\) is singular. Then by Theorem 5.4.8 the set of columns of \(\A\) is linearly dependent, so fix \(\vec{c}=(c_1,c_2,\ldots,c_n)^T\ne\vec{0}\) for which \(\A\vec{c}=\vec{0}.\) Now \(\A(2\vec{c})=2\A\vec{c}=2\vec{0}=\vec{0}\) and since \(\vec{c}\ne\vec{0},\,2\vec{c}\ne\vec{c}.\) We have therefore found two distinct vectors \(\u=\vec{c}\) and \(\v=2\vec{c}\) for which \(\A\u=\A\v.\)
\((\Leftarrow)\) Now suppose that distinct \(\u,\v\in\R^n\) satisfy \(\A\u=\A\v\) and fix such \(\u,\v.\) Then \(\A(\u-\v)=\vec{0}\) which, since \(\u-\v\ne\vec{0},\) precludes the linear independence of the set of columns of \(\A.\) By Theorem 5.4.8 we conclude that \(\A\) is singular.
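For a concrete instance of the \((\Rightarrow)\) direction, here is a small sketch (NumPy assumed): a singular matrix and a nonzero null vector produce exactly the range collision described in the proof.

```python
import numpy as np

# A singular matrix: the second column is twice the first.
A = np.array([[1., 2.],
              [2., 4.]])
c = np.array([2., -1.])           # a nonzero vector with A @ c = 0

u, v = c, 2 * c                   # distinct, since c != 0
print(np.allclose(A @ u, A @ v))  # True: a range collision
```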
Corollary 5.4.10. Invertibility and linear independence of the set of rows of \(\A\).
Let \(\A\in\R^{n\times n}.\) Then \(\A\) is invertible if and only if the set \(\left\{\A_{i\ast}\right\}_{i=1}^n\) of rows of \(\A\) is linearly independent.
Proof.
The proof is a worksheet exercise.
Example 5.4.11. Linear independence in \(\boldsymbol{\R}^n\).
To determine whether the set of vectors \(\boldsymbol{\Psi}=\left\{ \lmatrix{r}1\\1\\-2 \rmatrix, \lmatrix{r}2\\-1\\5 \rmatrix, \lmatrix{r}0\\6\\5 \rmatrix \right\}\) is linearly independent, we create the matrix \(\A =\lmatrix{rrr}1 \amp 2 \amp 0 \\ 1 \amp -1 \amp 6 \\ -2 \amp 5 \amp 5 \rmatrix\) and eliminate to \(\U=\lmatrix{rrr}1 \amp 2 \amp 0 \\ 0 \amp -3 \amp 6 \\ 0 \amp 0 \amp 23 \rmatrix.\) Since \(\U\) shows a full set of pivots we conclude by Theorem 3.10.19 that \(\A\) is nonsingular, so by Theorem 5.4.8 the set of columns of \(\A\) (and hence the set \(\boldsymbol{\Psi}\)) is linearly independent. Alternatively we could have computed the determinant of \(\A\) to check for invertibility.
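Either test in the example is a one-liner numerically; the sketch below (NumPy assumed) checks both the pivot count, via the rank, and the determinant.

```python
import numpy as np

A = np.array([[ 1,  2, 0],
              [ 1, -1, 6],
              [-2,  5, 5]], dtype=float)

print(np.linalg.matrix_rank(A))  # 3: a full set of pivots
print(np.linalg.det(A))          # approximately -69, nonzero, so A is invertible
```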
Theorem 5.4.12. Coefficients of linearly independent vectors are unique.
Let \(V\) be a vector space, suppose \(\boldsymbol{\Psi}=\left\{\vec{v_1},\vec{v_2},\ldots,\vec{v_k}\right\}\) is a linearly independent set and let \(\v\in \text{span}(\boldsymbol{\Psi}).\) In the representation
\[ \v=c_1\vec{v_1}+c_2\vec{v_2}+\cdots+c_k\vec{v_k} \tag{5.4.3} \]
the coefficients \(c_1,c_2,\ldots,c_k\) are unique.
Proof.
Suppose \(\v\) has two representations \(\v=c_1\vec{v_1}+c_2\vec{v_2}+\cdots+c_k\vec{v_k}\) and \(\v=c_1'\vec{v_1}+c_2'\vec{v_2}+\cdots+c_k'\vec{v_k}.\) Subtracting gives
\[ (c_1-c_1')\vec{v_1}+(c_2-c_2')\vec{v_2}+\cdots+(c_k-c_k')\vec{v_k}=\vec{0}, \]
which by Definition 5.4.3 can only happen if all \(c_i'=c_i,\) since \(\boldsymbol{\Psi}\) is linearly independent. Thus the coefficients in (5.4.3) are indeed unique.
Theorem 5.4.13. Any linearly independent set in \(\R^m\) has no more than \(m\) vectors.
Let \(\boldsymbol{\Psi}=\left\{\vec{v_1},\vec{v_2},\ldots,\vec{v_n}\right\}\subset\R^m\) be linearly independent. Then \(m\ge n.\)
Proof.
We prove the contrapositive. Let \(\boldsymbol{\Psi}=\left\{\vec{v_1},\vec{v_2},\ldots,\vec{v_n}\right\}\subset\R^m\) with \(m\lt n\) and form the matrix
\[ \A=\begin{pmatrix} \vec{v_1} \amp \vec{v_2} \amp \cdots \amp \vec{v_n} \end{pmatrix}\in\R^{m\times n}. \]
Consider the equation \(\A\x=\vec{0}\) which is always consistent. We eliminate \(\A\x=\vec{0}\) to \(\J\x=\vec{0}\) which is also consistent and which exhibits \(n\) variables and at most \(m\lt n\) pivot variables, ensuring that there is at least one free variable in \(\J\x=\vec{0}.\) Setting this free variable equal to \(1\) yields a nontrivial solution \(\x\) to \(\J\x=\vec{0};\) by Corollary 3.4.7, \(\x\) also solves \(\A\x=\vec{0}.\)
The components of \(\x\) are the coefficients in a nontrivial linear combination of the columns of \(\A\) (hence of the vectors in \(\boldsymbol{\Psi}\)) which equals \(\vec{0}.\) We conclude that Definition 5.4.1 is satisfied and \(\boldsymbol{\Psi}\) is linearly dependent.
Thus any linearly independent set in \(\R^m\) can contain no more than \(m\) vectors.
Theorem 5.4.13 can be generalized from \(\R^m\) to an arbitrary vector space \(V,\) but the proof is beyond the scope of this text.
Definition 5.4.14. Span.
Let \(V\) be a vector space. The span of a finite set of vectors \(\boldsymbol{\Psi}=\{\vec{v_1},\,\vec{v_2},\,\ldots,\vec{v_n}\}\) is the set of all linear combinations of the vectors in \(\boldsymbol{\Psi},\) written span\(\boldsymbol{\Psi}=\{a_1\vec{v_1}+a_2\vec{v_2}+\cdots+a_n\vec{v_n}\,|\,a_i \in \R\}.\) The set \(\boldsymbol{\Psi}\) is said to span \(V\) if \(V=\text{span}\{\vec{v_1},\,\vec{v_2},\,\ldots,\vec{v_n}\}.\) In this case we call \(\{\vec{v_1},\,\vec{v_2},\,\ldots,\vec{v_n}\}\) a spanning set for \(V,\) and we say \(V\) is spanned by \(\{\vec{v_1},\,\vec{v_2},\,\ldots,\vec{v_n}\}.\)
Remark 5.4.15. \(\boldsymbol{\Psi}\) may be finite but its span is infinite.
Though a finite nontrivial set of vectors \(\boldsymbol{\Psi}=\{\vec{v_1},\vec{v_2},...,\vec{v_n}\}\) is indeed finite, its span
\[ \text{span}\boldsymbol{\Psi}=\{a_1\vec{v_1}+a_2\vec{v_2}+\cdots+a_n\vec{v_n}\,|\,a_i\in\R\} \]
is an infinite set: if \(\vec{v_i}\ne\vec{0}\) then the vectors \(a\vec{v_i},\,a\in\R,\) are all distinct and all lie in span\(\boldsymbol{\Psi}.\)
Theorem 5.4.17. Invertibility and the span of the set of columns of \(\A\).
Let \(\A\in\R^{n\times n}.\) Then \(\A\) is invertible if and only if the set \(\left\{\A_{\ast j}\right\}_{j=1}^n\) of columns of \(\A\) spans \(\R^n.\)
Corollary 5.4.18. Invertibility and the span of the set of rows of \(\A\).
Let \(\A\in\R^{n\times n}.\) Then \(\A\) is invertible if and only if the set \(\left\{\A_{i\ast}\right\}_{i=1}^n\) of rows of \(\A\) spans \(\R^n.\)
Proof.
The proof is a worksheet exercise.
Most questions about both span and linear independence for sets of vectors in \(\R^m\) are answered by solving elimination problems with two notable but easy exceptions:
If \(m\lt n\) the set \(\boldsymbol{\Psi}=\{\vec{v_1},\vec{v_2},...,\vec{v_n}\}\subset\R^m\) is linearly dependent by Theorem 5.4.13.
If \(m\gt n,\) Theorem 5.4.24 guarantees that \(\boldsymbol{\Psi}\) does not span \(\R^m\) since it has too few vectors.
Otherwise, whether we are asked about linear independence or span of some set \(\boldsymbol{\Psi}=\{\vec{v_1},\vec{v_2},...,\vec{v_n}\}\subset\R^m\) (perhaps with \(m=n\)), we always start the same way, by constructing a matrix \(\A\) from the vectors in \(\boldsymbol{\Psi}:\)
\[ \A=\begin{pmatrix} \vec{v_1} \amp \vec{v_2} \amp \cdots \amp \vec{v_n} \end{pmatrix}\in\R^{m\times n}. \]
If we are asked to determine whether \(\boldsymbol{\Psi}=\{\vec{v_1},\vec{v_2},...,\vec{v_n}\}\subset\R^m\) is linearly independent, we have two cases.
If \(m=n\) we may avail ourselves of any of The Big Theorem results to this point, by showing that \(\A\) is invertible in any valid way we wish and invoking Theorem 5.4.8. Elimination to \(\U\) followed by counting pivots is often the most efficient approach, but taking the determinant is also popular.
If \(m\gt n\) The Big Theorem is unavailable since inverses are not defined, so one approach is to determine whether \(N(\A)=\{\vec{0}\}.\) We eliminate \(\A\) to \(\U\) and count pivots. If there are fewer than \(n\) pivots in \(\U\) then setting any free variable in \(\U\x=\vec{0}\) equal to \(1\) and finding the remaining components of \(\x\) yields a nontrivial solution to the homogeneous equation \(\A\x=\vec{0}.\) By Definition 5.4.1 this is exactly the statement that the set \(\{\vec{v_1},\,\vec{v_2},\,\ldots,\vec{v_n}\}\) is linearly dependent.
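As a programmatic companion to this case, the following sketch (NumPy assumed; `columns_independent` is our own hypothetical helper) counts pivots via the rank: the columns are independent exactly when the rank equals \(n.\)

```python
import numpy as np

def columns_independent(vectors):
    """Return True when the given vectors in R^m are linearly independent."""
    A = np.column_stack(vectors)                   # A in R^{m x n}, one vector per column
    return np.linalg.matrix_rank(A) == A.shape[1]  # n pivots <=> N(A) = {0}

# A tall (m > n) example: two independent vectors in R^3.
print(columns_independent([np.array([1., 0., 2.]),
                           np.array([-1., 3., 7.])]))  # True
```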
If we are asked to determine whether \(\boldsymbol{\Psi}=\{\vec{v_1},\vec{v_2},...,\vec{v_n}\}\subset\R^m\) spans \(\R^m,\) we again have a few cases.
If \(m=n\) the question of whether or not \(\boldsymbol{\Psi}\) spans \(\R^m\) is exactly the question of whether, for arbitrary \(\b\in\R^m,\,\A\x=\b\) has a solution, which by Theorem 3.10.1 is true if and only if \(\A\) is invertible. In this case we find ourselves in the same spot as when we test for linear independence when \(m=n\) and of course we may use the same methods.
If \(m\lt n\) we determine whether or not \(\boldsymbol{\Psi}\) spans \(\R^m\) by fixing an arbitrary \(\b\in\R^m\) and eliminating \(\left(\A|\b\right)\) to \(\left(\U|\c\right)\) where as a result of the elimination process the components of \(\c\) will be linear combinations of the components of \(\b.\) Now \(\A\x=\b\) is consistent, and hence span\((\boldsymbol{\Psi})=\R^m,\) if and only if \(\U\x=\c\) is consistent. Theorem 3.1.38 characterizes consistent wide row-echelon systems: after elimination either \(\U\) shows a full set of pivots, or every zero row on the left-hand side is matched by a zero on the right (there are no LHS-only zero rows).
To show that \(\{\vec{v_1},\,\vec{v_2},\,\ldots,\vec{v_n}\}\) does not span \(\R^m,\) we find a particular \(\vec{b_0}\in\R^m\) for which \(\A\x=\vec{b_0}\) is inconsistent. For wide systems this means eliminating \(\left(\A|\vec{b_0}\right)\) to \(\left(\U|\vec{c_0}\right)\) which exhibits an LHS-only zero row. Such a \(\vec{b_0}\) may be found by setting \(\vec{b_0}=\begin{pmatrix}b_1\\b_2\\ \vdots\\b_m\end{pmatrix}\) before elimination, and then after elimination choosing the components of \(\vec{b_0}\) so that the entry across the separator from a zero row in \(\U\) is nonzero, producing the desired inconsistency.
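The same rank computation answers the span question: \(\boldsymbol{\Psi}\) spans \(\R^m\) exactly when elimination produces \(m\) pivots, so that no LHS-only zero row can arise. A sketch (NumPy assumed; `spans_Rm` is a hypothetical helper):

```python
import numpy as np

def spans_Rm(vectors, m):
    """Return True when the given vectors span R^m (rank m <=> m pivots)."""
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == m

# The wide (m < n) case of Example 5.4.21 below: four vectors in R^3.
print(spans_Rm([np.array([1., 3., 5.]),
                np.array([2., 1., 9.]),
                np.array([4., 4., 14.]),
                np.array([7., 0., 6.])], 3))  # True
```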
Example 5.4.19.
Determine if \(\v=\left(\begin{array}{r} 5\\-6\\-8 \end{array}\right)\) is in the span of the set of vectors \(\boldsymbol{\Psi}=\left\{ \begin{pmatrix} 1\\0\\2 \end{pmatrix}, \left(\begin{array}{r} -1\\3\\7 \end{array}\right) \right\}.\)
We create the matrix \(\A=\left(\begin{array}{rr} 1 \amp -1 \\ 0 \amp 3\\ 2 \amp 7\end{array}\right);\) all we need to do is to demonstrate whether the system \(\A\x=\v\) is consistent. We eliminate on the augmented matrix \(\left(\begin{array}{rr|r} 1 \amp -1 \amp 5\\ 0 \amp 3 \amp -6\\ 2 \amp 7 \amp -8\end{array}\right)\) to get \(\left(\begin{array}{rr|r} 1 \amp -1 \amp 5\\ 0 \amp 3 \amp -6\\ 0 \amp 0 \amp 0\end{array}\right).\) Since the lone zero row on the left is matched by a zero across the separator, by Corollary 3.1.12 the row-echelon system is consistent and by Theorem 3.1.29 the original system is also consistent. Indeed, back substitution gives \(x_2=-2\) and \(x_1=3.\) We conclude that \(\v=\left(\begin{array}{r} 5\\-6\\-8 \end{array}\right)\) is in the span of \(\boldsymbol{\Psi}=\left\{ \begin{pmatrix} 1\\0\\2 \end{pmatrix}, \left(\begin{array}{r} -1\\3\\7 \end{array}\right) \right\}.\)
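A numerical version of this membership test (NumPy assumed): solve the least-squares problem and check that the residual vanishes; the computed coefficients recover the combination explicitly.

```python
import numpy as np

A = np.array([[1., -1.],
              [0.,  3.],
              [2.,  7.]])
v = np.array([5., -6., -8.])

x, *_ = np.linalg.lstsq(A, v, rcond=None)
print(x)                      # [ 3. -2.]
print(np.allclose(A @ x, v))  # True: v = 3*Psi_1 - 2*Psi_2, so v is in span(Psi)
```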
Example 5.4.20.
To determine if the vector \(\b:=\begin{pmatrix}8\\6\\8\end{pmatrix}\) is in the span of the set \(\boldsymbol{\Psi}=\left\{\begin{pmatrix}1\\2\\0\end{pmatrix},\begin{pmatrix} 3\\0\\4\end{pmatrix} \right\}\) we form the augmented matrix \((\A|\b)=\left( \begin{array}{cc|c} 1 \amp 3 \amp 8 \\ 2 \amp 0 \amp 6 \\ 0 \amp 4 \amp 8 \end{array}\right).\) After elimination we find \(\left( \begin{array}{rr|r} 1 \amp 3 \amp 8 \\ 0 \amp -6 \amp -10 \\ 0 \amp 0 \amp \frac{4}{3} \end{array}\right)\) which gives the inconsistent row \(0 = \frac{4}{3}.\) We conclude that \(\b\) is not in span\(\boldsymbol{\Psi}.\)
Example 5.4.21.
To determine if the set \(\boldsymbol{\Psi}=\left\{\begin{pmatrix}1\\3\\5\end{pmatrix},\begin{pmatrix}2\\1\\9\end{pmatrix},\begin{pmatrix}4\\4\\14\end{pmatrix},\begin{pmatrix}7\\0\\6\end{pmatrix}\right\}\) spans \(\R^3\) we fix arbitrary \(\b=\begin{pmatrix}b_1\\b_2\\b_3\end{pmatrix}\) and form the augmented matrix \(\left(\A|\b\right)=\left(\begin{array}{cccc|c} 1 \amp 2 \amp 4 \amp 7 \amp b_1 \\ 3 \amp 1 \amp 4 \amp 0 \amp b_2 \\ 5 \amp 9 \amp 14 \amp 6 \amp b_3\end{array}\right).\) After elimination we find \(\left(\begin{array}{rrrr|l} 1 \amp 2 \amp 4 \amp 7 \amp b_1 \\ 0 \amp 5 \amp 8 \amp 21 \amp 3b_1-b_2 \\ 0 \amp 0 \amp 22 \amp 124 \amp 22b_1+b_2-5b_3\end{array}\right)\) which is consistent by Corollary 3.1.12 since it shows a full set of pivots. Since \(\b\) is arbitrary we conclude that span\(\boldsymbol{\Psi}=\R^3.\)
Example 5.4.22.
To determine if the set \(\boldsymbol{\Psi}=\left\{\begin{pmatrix}1\\2\\0\end{pmatrix},\begin{pmatrix}3\\0\\4\end{pmatrix},\begin{pmatrix}4\\2\\4\end{pmatrix},\begin{pmatrix}5\\4\\4\end{pmatrix}\right\}\) spans \(\R^3\) we form the augmented matrix \(\left(\A|\b\right)=\left(\begin{array}{cccc|c} 1 \amp 3 \amp 4 \amp 5 \amp b_1 \\ 2 \amp 0 \amp 2 \amp 4 \amp b_2 \\ 0 \amp 4 \amp 4 \amp 4 \amp b_3\end{array}\right).\) After elimination we find \(\left(\begin{array}{cccc|l} 1 \amp 3 \amp 4 \amp 5 \amp b_1 \\ 0 \amp 6 \amp 6 \amp 6 \amp 2b_1-b_2 \\ 0 \amp 0 \amp 0 \amp 0 \amp 4b_1-2b_2-3b_3\end{array}\right)\) which is inconsistent if \(4b_1-2b_2-3b_3\ne0;\) setting \(b_1=1,\,b_2=b_3=0\) ensures this. The vector \(\vec{b_0}=\left(\begin{array}{c}1\\0\\0\end{array}\right)\) is therefore not in span\(\boldsymbol{\Psi},\) and we conclude that span\(\boldsymbol{\Psi}\ne\R^3.\)
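Numerically the failure to span shows up as a rank deficiency, and the choice of \(\vec{b_0}\) can be confirmed by checking that appending it raises the rank. A sketch (NumPy assumed):

```python
import numpy as np

A = np.array([[1., 3., 4., 5.],
              [2., 0., 2., 4.],
              [0., 4., 4., 4.]])
b0 = np.array([1., 0., 0.])

print(np.linalg.matrix_rank(A))                         # 2 < 3: Psi does not span R^3
print(np.linalg.matrix_rank(np.column_stack([A, b0])))  # 3: b0 lies outside span(Psi)
```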
Lemma 5.4.23. No linear combination of vectors in \(\boldsymbol{\Psi}\) can express a nonzero vector that is orthogonal to all vectors in \(\boldsymbol{\Psi}\).
Let \(V\) be an inner product space, let \(\boldsymbol{\Psi}=\left\{\vec{v_1},\vec{v_2},\ldots,\vec{v_n}\right\}\subset V\) and suppose \(\u\in V\setminus\{\vec{0}\}\) satisfies \(\ip{\u}{\vec{v_i}}=0\) for each of the \(\vec{v_i}\in\boldsymbol{\Psi}.\) Then \(\u\not\in\text{ span}\boldsymbol{\Psi}.\)
Proof.
Suppose the hypotheses and suppose BWOC that there exists a linear combination of vectors in \(\boldsymbol{\Psi}\) equal to \(\u:\)
\[ \u=c_1\vec{v_1}+c_2\vec{v_2}+\cdots+c_n\vec{v_n}. \]
Taking the inner product of both sides with \(\u\) gives
\[ \ip{\u}{\u}=c_1\ip{\u}{\vec{v_1}}+c_2\ip{\u}{\vec{v_2}}+\cdots+c_n\ip{\u}{\vec{v_n}}=0, \]
which is a contradiction since \(\u\ne\vec{0}\) guarantees \(\ip{\u}{\u}>0.\) Thus our assumption that there exists a linear combination of vectors in \(\boldsymbol{\Psi}\) equal to \(\u\) is false.
Theorem 5.4.24. Any spanning set for \(\R^m\) must contain at least \(m\) vectors.
If \(\boldsymbol{\Psi}=\left\{\vec{v_1},\vec{v_2},\ldots,\vec{v_n}\right\}\subset\R^m\) spans \(\R^m\) then \(m\le n.\)
Proof.
We prove the contrapositive. Let \(\boldsymbol{\Psi}=\left\{\vec{v_1},\vec{v_2},\ldots,\vec{v_n}\right\}\subset\R^m\) with \(n\lt m\) and form the matrix
\[ \A=\begin{pmatrix} \vec{v_1} \amp \vec{v_2} \amp \cdots \amp \vec{v_n} \end{pmatrix}\in\R^{m\times n}. \]
Consider the equation \(\A^T\x=\vec{0}\) which is always consistent. We eliminate \(\A^T\x=\vec{0}\) to \(\J\x=\vec{0}\) which is also consistent and which exhibits \(m\) variables and at most \(n\lt m\) pivot variables, ensuring that there is at least one free variable in \(\J\x=\vec{0}.\) Setting this free variable equal to \(1\) (or any nonzero value) yields a nontrivial solution \(\x\in\R^m\) to \(\J\x=\vec{0};\) by Corollary 3.4.7, \(\x\) also solves \(\A^T\x=\vec{0}.\)
This shows that \(\x\) is orthogonal to each of the rows of \(\A^T\) and hence to the columns of \(\A.\) Now by Lemma 5.4.23, \(\x\) cannot be written as a linear combination of the vectors in \(\boldsymbol{\Psi}\) and is therefore not in span\(\left(\boldsymbol{\Psi}\right),\) so \(\boldsymbol{\Psi}\) does not span \(\R^m.\)
Like Theorem 5.4.13, Theorem 5.4.24 can also be generalized from \(\R^m\) to a general vector space \(V,\) but the proof is beyond the scope of this text.
Definition 5.4.25. Bases.
Let \(V\) be a vector space. Then a basis for \(V\) is a linearly independent set \(\mathcal{\B}\) which spans \(V\) (that is, a basis is a linearly independent spanning set).
Example 5.4.26. The simplest basis for \(\R^2\).
The set \(\left\{\begin{pmatrix}1\\0 \end{pmatrix} ,\begin{pmatrix}0\\1 \end{pmatrix} \right\}\) is a basis for \(\R^2\) as follows. First, for any \(\begin{pmatrix}a\\b \end{pmatrix} \in\R^2\) we can write \(\begin{pmatrix}a\\b \end{pmatrix} =a\begin{pmatrix}1\\0 \end{pmatrix} +b\begin{pmatrix}0\\1 \end{pmatrix},\) so that by Definition 5.4.14 \(\left\{\begin{pmatrix}1\\0 \end{pmatrix} ,\begin{pmatrix}0\\1 \end{pmatrix} \right\}\) spans \(\R^2.\) Second, the set \(\left\{\begin{pmatrix}1\\0 \end{pmatrix} ,\begin{pmatrix}0\\1 \end{pmatrix} \right\}\) is linearly independent by Definition 5.4.3 since setting \(\begin{pmatrix}1\amp 0\\0\amp1\end{pmatrix}\begin{pmatrix}x_1\\ x_2\end{pmatrix}=\begin{pmatrix}0\\0\end{pmatrix}\) immediately yields \(x_1=x_2=0.\) By Definition 5.4.25 we conclude that the set \(\left\{\begin{pmatrix}1\\0 \end{pmatrix} ,\begin{pmatrix}0\\1 \end{pmatrix} \right\}\) is a basis for \(\R^2.\)
Remark 5.4.27. Bases for a given vector space are highly non-unique.
That is, there are many, often infinitely many, distinct bases for any vector space. For example, both \(\left\{\begin{pmatrix}1\\0 \end{pmatrix},\,\begin{pmatrix}0\\1\end{pmatrix} \right\}\) and \(\left\{\begin{pmatrix}1\\1 \end{pmatrix} ,\,\lmatrix{r}1\\-1\rmatrix\right\}\) are bases for \(V=\R^2.\) It is left to the reader to show the latter is a basis for \(\R^2.\)
Theorem 5.4.28. Bases never contain \(\vec{0}\).
Let \(V\) be a vector space and \(\mathcal{\B}\) a basis for \(V.\) Then \(\vec{0}\not\in\mathcal{\B}.\)
Proof.
Suppose BWOC that \(\vec{0}\in\mathcal{\B},\) say \(\mathcal{\B}=\{\vec{v_1},\,\vec{v_2},\,\ldots,\vec{v_n},\,\vec{0}\}\) is a basis for \(V.\) Then \(0\cdot\vec{v_1}+0\cdot\vec{v_2}+\cdots+0\cdot\vec{v_n}+1\cdot\vec{0}\) is a nontrivial linear combination of vectors in \(\{\vec{v_1},\,\vec{v_2},\,\ldots,\vec{v_n},\,\vec{0}\}\) which is equal to \(\vec{0},\) from which we conclude that \(\mathcal{\B}\) is not linearly independent by Definition 5.4.3 and hence not a basis by Definition 5.4.25.
Theorem 5.4.29. No basis for the trivial vector space.
The trivial vector space \(V=\left\{\vec{0}\right\}\) has no basis.
Proof.
The proof is a worksheet exercise.
Theorem 5.4.30. Any set of \(n\) linearly independent vectors in \(\R^n\) is a basis for \(\R^n\).
Let \(\boldsymbol{\Psi}=\left\{\vec{v_1},\vec{v_2},\ldots,\vec{v_n}\right\}\) be a set of linearly independent vectors in \(\R^n.\) Then \(\boldsymbol{\Psi}\) is a basis for \(\R^n.\)
Proof.
Let \(\boldsymbol{\Psi}=\left\{\vec{v_1},\vec{v_2},\ldots,\vec{v_n}\right\}\) be a set of \(n\) linearly independent vectors in \(\R^n\) and form the matrix
\[ \A=\begin{pmatrix} \vec{v_1} \amp \vec{v_2} \amp \cdots \amp \vec{v_n} \end{pmatrix}\in\R^{n\times n}. \]
By construction the set of columns of \(\A\) is linearly independent so by Theorem 5.4.6, \(N(\A)=\left\{\vec{0}\right\}.\) Now by Theorem 5.1.25, \(\A\) is invertible and hence by Theorem 3.10.1 the set of columns of \(\A\) spans \(\R^n.\) By Definition 5.4.25 we conclude that \(\boldsymbol{\Psi}\) is a basis for \(\R^n.\)
Theorem 5.4.31. The canonical basis for \(\R^n\).
The set of canonical unit vectors forms a basis for \(\R^n.\) (This basis is called the canonical basis of \(\R^n.\))
Proof.
The set \(\{\wh{e_1},\wh{e_2},\ldots,\wh{e_n}\}\) of canonical unit vectors in \(\R^n\) is linearly independent by Definition 5.4.3 because
\[ c_1\wh{e_1}+c_2\wh{e_2}+\cdots+c_n\wh{e_n}=\begin{pmatrix}c_1\\c_2\\\vdots\\c_n\end{pmatrix}=\vec{0} \]
guarantees that each of the \(c_i\) is \(0.\) So, by Theorem 5.4.30, \(\{\wh{e_1},\wh{e_2},\ldots,\wh{e_n}\}\) is a basis for \(\R^n.\)
Remark 5.4.32. Determining whether a given \(\boldsymbol{\Psi}\) is a basis for its span.
Let \(V\) be a vector space and let \(\boldsymbol{\Psi}\) be some finite subset of \(V.\) If \(\boldsymbol{\Psi}\) is linearly independent, it is a basis for its span. To prove that \(\boldsymbol{\Psi}\) is linearly independent we must show that no nontrivial linear combination of the vectors in \(\boldsymbol{\Psi}\) equals \(\vec{0}.\)
When \(\boldsymbol{\Psi}\subset\R^m\) we always start our analysis by creating a matrix \(\A\) whose columns are the vectors in \(\boldsymbol{\Psi}.\) To determine whether or not the columns of \(\A\) are linearly independent, by Theorem 5.4.8 we need only determine whether or not \(\A\) is invertible, which opens up all accumulated results in The Big Theorem for our use.
Corollary 5.4.33. \(\boldsymbol{\mathcal{B}}\) is a basis if and only if every vector can be written uniquely as a linear combination of the vectors in \(\boldsymbol{\mathcal{B}}\).
Let \(V\) be a vector space. \(\boldsymbol{\mathcal{B}}=\left\{\vec{v_1},\vec{v_2},\ldots,\vec{v_n}\right\}\) is a basis for \(V\) if and only if every vector \(\v\in V\) can be written uniquely as a linear combination of the vectors in \(\boldsymbol{\mathcal{B}}.\)
Proof.
\((\Rightarrow)\) Suppose \(\boldsymbol{\mathcal{B}}\) is a basis for \(V\) and let \(\v\in V.\) Since bases span \(V\) there is at least one representation \(\v=c_1\vec{v_1}+c_2\vec{v_2}+\cdots+c_n\vec{v_n},\) and since bases are linearly independent, by Theorem 5.4.12 the coefficients in this representation are unique.
\((\Leftarrow)\) Now suppose that every \(\v\in V\) can be expressed uniquely as \(\v=c_1\vec{v_1}+c_2\vec{v_2}+\cdots+c_n\vec{v_n};\) then in particular \(\boldsymbol{\mathcal{B}}\) spans \(V.\) The zero vector \(\vec{0}\) can then be expressed uniquely as a linear combination of vectors in \(\boldsymbol{\mathcal{B}},\) and we already know that
\[ \vec{0}=0\cdot\vec{v_1}+0\cdot\vec{v_2}+\cdots+0\cdot\vec{v_n}, \]
so this is the only representation of \(\vec{0}\) as a linear combination of the vectors in \(\boldsymbol{\mathcal{B}}.\) This shows that \(\boldsymbol{\mathcal{B}}\) satisfies Definition 5.4.3 so \(\boldsymbol{\mathcal{B}}\) is linearly independent and hence a basis for \(V\text{.}\)
Theorem 5.4.34. Linear independence and span of sets of \(n\) vectors in \(\R^n\).
Let \(\boldsymbol{\Psi}=\{\vec{v_1},\vec{v_2},...,\vec{v_n}\}\subset\R^n.\) Then \(\boldsymbol{\Psi}\) is linearly independent if and only if \(\boldsymbol{\Psi}\) spans \(\R^n.\)
Proof.
The proof is a homework exercise.
Theorem 5.4.35. Invertibility equates to whether the set of columns is a basis for \(\R^n\).
\(\A\in\R^{n\times n}\) is invertible if and only if the set of columns of \(\A\) is a basis for \(\R^n.\)
Proof.
The proof is a homework exercise.
Theorem 5.4.36. Invertibility equates to whether the set of rows is a basis for \(\R^n\).
\(\A\in\R^{n\times n}\) is invertible if and only if the set of rows of \(\A\) is a basis for \(\R^n.\)
Proof.
The proof is a homework exercise.
Corollary 5.4.37. Bases for \(\R^n\) Part 1.
A set \(\boldsymbol{\Psi}=\left\{\vec{v_1},\vec{v_2},\ldots,\vec{v_n}\right\}\subset\R^n\) is a basis for \(\R^n\) if and only if \(\boldsymbol{\Psi}\) is linearly independent.
Proof.
The proof is a worksheet exercise.
Corollary 5.4.38. Bases for \(\R^n\) Part 2.
A set \(\boldsymbol{\Psi}=\left\{\vec{v_1},\vec{v_2},\ldots,\vec{v_n}\right\}\subset\R^n\) is a basis for \(\R^n\) if and only if \(\boldsymbol{\Psi}\) spans \(\R^n.\)
Proof.
The proof is a worksheet exercise.
Example 5.4.39. Bases for \(\R^n\).
To show that \(\boldsymbol{\Psi}=\left\{\begin{pmatrix} 1\\1\\2 \end{pmatrix},\left(\begin{array}{r}-1\\0\\-1\end{array}\right),\left(\begin{array}{r}1\\1\\-2\end{array}\right)\right\}\) is a basis for \(\R^3\) we may use Corollary 5.4.37 or Corollary 5.4.38.
To use Corollary 5.4.37 we show that the vectors in \(\boldsymbol{\Psi}\) are linearly independent by installing them as the columns of a matrix \(\A=\left( \begin{array}{rrr} 1 \amp -1 \amp 1 \\ 1 \amp 0 \amp 1 \\2 \amp -1 \amp -2 \end{array} \right)\) which eliminates to \(\U=\left( \begin{array}{rrr} 1 \amp -1 \amp 1 \\ 0 \amp 1 \amp 0 \\ 0 \amp 0 \amp -4 \end{array} \right).\) Since \(\U\) shows a full set of pivots we conclude by Theorem 3.10.19 that \(\A\) is nonsingular, so by Theorem 5.4.8 the set of columns of \(\A\) (and hence the set \(\boldsymbol{\Psi}\)) is linearly independent. By Corollary 5.4.37, \(\boldsymbol{\Psi}\) is a basis for \(\R^3.\)
To use Corollary 5.4.38 we let \(\v=\begin{pmatrix} b_1\\b_2\\b_3\end{pmatrix}\) be an arbitrary vector in \(\R^3\) and form the augmented matrix \(\left( \begin{array}{rrr|c} 1 \amp -1 \amp 1 \amp b_1 \\ 1 \amp 0 \amp 1 \amp b_2 \\2 \amp -1 \amp -2 \amp b_3 \end{array} \right).\) After elimination we find \(\left( \begin{array}{rrr|l} 1 \amp -1 \amp 1 \amp b_1 \\ 0 \amp 1 \amp 0 \amp b_2-b_1 \\ 0 \amp 0 \amp -4 \amp b_3-b_2-b_1 \end{array} \right)\) which is a consistent system for all choices of \(b_1,b_2,b_3.\) Thus span\(\boldsymbol{\Psi}=\R^3.\)
Since the vectors in \(\boldsymbol{\Psi}\) are linearly independent and span \(\R^3,\) they form a basis for \(\R^3.\)
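Both checks in this example reduce to one invertibility computation; the sketch below (NumPy assumed) also solves for the unique coefficients of a sample vector, illustrating Corollary 5.4.33.

```python
import numpy as np

A = np.array([[ 1., -1.,  1.],
              [ 1.,  0.,  1.],
              [ 2., -1., -2.]])

print(np.linalg.det(A))       # approximately -4, nonzero: Psi is a basis for R^3

v = np.array([1., 2., 3.])    # a sample vector; any v in R^3 works
c = np.linalg.solve(A, v)     # the unique coordinates of v relative to Psi
print(np.allclose(A @ c, v))  # True
```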
Remark 5.4.40. Determining whether a given \(\boldsymbol{\Psi}\) is a basis for a general vector space \(V\).
Generally, if we suspect that a set \(\boldsymbol{\Psi}\) is a basis for a vector space \(V,\) we try to prove that \(\boldsymbol{\Psi}\) is linearly independent and that span\(\boldsymbol{\Psi}=V.\) If we succeed in proving both assertions, we conclude that \(\boldsymbol{\Psi}\) is indeed a basis for \(V.\) If on the other hand we suspect that a set \(\boldsymbol{\Psi}\) is not a basis for a vector space \(V,\) we try to prove that \(\boldsymbol{\Psi}\) is linearly dependent, or that \(\boldsymbol{\Psi}\) spans only a proper subset of \(V.\) If we succeed in proving either assertion, we conclude that \(\boldsymbol{\Psi}\) is not a basis for \(V.\)
The processes by which we may accomplish any of the tasks described in the paragraph above depend on the type of elements in \(V;\) somewhat different approaches may be used for vector spaces of polynomials than for vector spaces of matrices, for example.
The definitions for linear independence, span, and bases apply to all vector spaces. For example, it can be shown that the set \(\{ 1, x, x^2,...,x^n\}\) is a basis for the vector space \(\mathcal{P}_n:=\{f(x)= a_0 + a_1x+a_2x^2 + \cdots+ a_nx^n\mid a_i \in \R\}\) of polynomials of degree less than or equal to \(n.\) However, we will restrict our examples to the most common vector spaces \(\R^m\) which are rich with examples and give important results.
Lemma 5.4.41. Appending any \(\u\in\boldsymbol{V}\) to a basis for \(\boldsymbol{V}\) yields a linearly dependent set.
Let \(V\) be a vector space, let \(\mathcal{B}=\left\{\vec{v_1},\vec{v_2},\ldots,\vec{v_n}\right\}\) be a basis for \(V\) and let \(\u\in V.\) Then the set \(\left\{\vec{v_1},\vec{v_2},\ldots,\vec{v_n},\u\right\}\) is linearly dependent.
Theorem 5.4.42. Any set with more vectors than some basis is linearly dependent.
Let \(V\) be a vector space, let \(\mathcal{B}=\left\{\vec{v_1},\vec{v_2},\ldots,\vec{v_n}\right\}\) be a basis for \(V\) and let \(\boldsymbol{\Psi}=\left\{\vec{u_1},\vec{u_2},\ldots,\vec{u_m}\right\}\) be a set in \(V\) with \(m\gt n.\) Then \(\boldsymbol{\Psi}\) is linearly dependent.
Proof.
Suppose the hypotheses. Since \(\mathcal{B}\) is a basis for \(V,\) we may write each element of \(\boldsymbol{\Psi}\) as a linear combination of the elements of \(\mathcal{B},\) using coefficients with two indices:
\[ \vec{u_i}=c_{i1}\vec{v_1}+c_{i2}\vec{v_2}+\cdots+c_{in}\vec{v_n},\qquad i=1,2,\ldots,m, \tag{5.4.4} \]
and collect these coefficients into the matrix \(\C=(c_{ij})\in\R^{m\times n}.\)
We prove that the set of rows of \(\C\) is linearly dependent and use that to show that \(\boldsymbol{\Psi}\) is linearly dependent. The matrix \(\C^T\) is wide so the consistent equation \(\C^T\x=\vec{0}\) must eliminate to consistent wide \(\J\x=\vec{0}.\) By Corollary 3.4.7 we are assured that there is at least one nontrivial solution \(\x\in\R^m\) to \(\C^T\x=\vec{0}\) which by Definition 5.4.1 guarantees that the set of rows of \(\C\) is linearly dependent. Fix such a nontrivial \(\x=\begin{pmatrix}x_1\\x_2\\\vdots\\x_m\end{pmatrix}.\)
Although the vectors in \(\boldsymbol{\Psi}\) and \(\mathcal{B}\) are not column vectors of real numbers, we may use matrix notation for convenience in collecting results as follows. We may represent (5.4.4) faithfully as
\[ \begin{pmatrix} \vec{u_1} \amp \vec{u_2} \amp \cdots \amp \vec{u_m} \end{pmatrix}=\begin{pmatrix} \vec{v_1} \amp \vec{v_2} \amp \cdots \amp \vec{v_n} \end{pmatrix}\C^T, \]
so that
\[ x_1\vec{u_1}+x_2\vec{u_2}+\cdots+x_m\vec{u_m}=\begin{pmatrix} \vec{v_1} \amp \vec{v_2} \amp \cdots \amp \vec{v_n} \end{pmatrix}\C^T\x=\vec{0}. \]
As \(\x\ne\vec{0},\) we have found a nontrivial linear combination of the vectors in \(\boldsymbol{\Psi}\) which equals \(\vec{0},\) so by Definition 5.4.1, \(\boldsymbol{\Psi}\) is linearly dependent.
Although bases are not unique, the number of vectors in every basis for a given vector space must be the same.
Theorem 5.4.43. All bases of \(V\) contain the same number of vectors.
Let \(V\) be a vector space and suppose \(\mathcal{\B_1}=\left\{\vec{v_1},\vec{v_2},\ldots,\vec{v_n}\right\}\) and \(\mathcal{\B_2}=\left\{\vec{u_1},\vec{u_2},\ldots,\vec{u_m}\right\}\) are bases for \(V.\) Then \(m=n.\)
Proof.
The proof is a worksheet exercise.
Definition 5.4.44. Dimension of a vector space.
The dimension of a nontrivial vector space \(V,\) written \(\dim V,\) is the number of vectors in any basis of \(V.\) For the trivial vector space we define \(\dim\{\vec{0}\}:=0.\)
Remark 5.4.45.
There are infinite-dimensional vector spaces such as, for example, the vector space of real sequences. We will concentrate solely on finite-dimensional vector spaces in this text. So when we say “Let \(V\) be a vector space” we always mean “Let \(V\) be a finite-dimensional vector space.”
Theorem 5.4.46. Every spanning set contains a basis.
Let \(V\) be a nontrivial vector space and suppose \(\boldsymbol{\Psi}=\{\vec{v_1},\vec{v_2},\ldots,\vec{v_n}\}\) spans \(V.\) Then there exists a basis \(\mathcal{\B}\subseteq\boldsymbol{\Psi}.\)
Proof.
Suppose \(V\) is a nontrivial vector space and assume \(\boldsymbol{\Psi_0}=\{\vec{v_1},\vec{v_2},\ldots,\vec{v_n}\}\) spans \(V.\) If \(\boldsymbol{\Psi_0}\) is linearly independent then by Definition 5.4.25 it is a basis and we are done. If not, \(\boldsymbol{\Psi_0}\) is linearly dependent, so by Lemma 5.4.2 for some \(j\in\{1,2,\ldots,n\},\,\vec{v_j}\) can be written as a linear combination of the other vectors in \(\boldsymbol{\Psi_0}.\) Fix such a \(j\) and define
\[ \boldsymbol{\Psi_1}:=\boldsymbol{\Psi_0}\setminus\{\vec{v_j}\}. \]
By Lemma 5.4.16, \(\text{span}\boldsymbol{\Psi_1}=\text{span}\boldsymbol{\Psi_0}.\) Now either \(\boldsymbol{\Psi_1}\) is linearly independent in which case \(\boldsymbol{\Psi_1}\) is a basis and we are done, or \(\boldsymbol{\Psi_1}\) is linearly dependent in which case we carry out another step of identifying a redundant vector, casting it out, and invoking Lemma 5.4.16 on the remaining set.
We repeat this process until we obtain a linearly independent set which spans \(V.\) The process is guaranteed to terminate after casting out at most \(n-1\) vectors, since \(\dim V\ge1.\) The remaining \(\boldsymbol{\Psi_k}\) is a basis for \(V.\)
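In \(\R^m\) the cast-out procedure of this proof can be carried out mechanically with rank computations. The following is a sketch under that restriction (NumPy assumed; `extract_basis` is a hypothetical helper), not the text's method for a general vector space \(V.\)

```python
import numpy as np

def extract_basis(vectors):
    """Cast out redundant vectors until the survivors are independent."""
    basis = list(vectors)
    rank = np.linalg.matrix_rank(np.column_stack(basis))  # dim span(Psi_0)
    i = 0
    while i < len(basis):
        trial = basis[:i] + basis[i + 1:]
        # basis[i] is redundant exactly when removing it preserves the span.
        if trial and np.linalg.matrix_rank(np.column_stack(trial)) == rank:
            basis = trial
        else:
            i += 1
    return basis

# The spanning set of Example 5.4.22 reduces to a 2-vector basis of its span.
psi = [np.array([1., 2., 0.]), np.array([3., 0., 4.]),
       np.array([4., 2., 4.]), np.array([5., 4., 4.])]
print(len(extract_basis(psi)))  # 2
```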
Corollary 5.4.47. The dimension of \(\R^n\) is as expected.
\(\dim\R^n=n.\)
Proof.
The proof is a worksheet exercise.
Note that if \(\A\in\R^{n\times n}\) is invertible, then \(N(\A)=\{\vec{0}\},\) so \(\dim N(\A)=0\) and by Theorem 5.4.29 there is no basis for \(N(\A).\) But sometimes even when \(\A\) is not square (and hence not invertible) we may have \(N(\A)=\{\vec{0}\},\) \(\dim N(\A)=0\) and no basis for \(N(\A).\) How can this happen? Exactly when \(\A\in\R^{m\times n},\,m>n,\) and the columns of \(\A\) are linearly independent, as in
\[ \A=\begin{pmatrix}1 \amp 0\\0 \amp 1\\0 \amp 0\end{pmatrix}. \]
You should try to think of more creative examples (no \(0\)'s or \(1\)'s, say) of this circumstance. However, the interesting case of \(N(\A)\) is when \(\A\in\R^{m\times n},\,m\lt n\) (and perhaps there are zero rows after elimination but not necessarily so). Then there are guaranteed to be free variables, and the number of them is exactly \(n-r,\) where \(r\) is the number of pivots (compare with the matrix in (6.1.3) above).
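A quick computational check of the count \(n-r\) (NumPy and SciPy assumed; the matrix is a made-up wide example):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1., 2., 0., 1.],
              [0., 1., 3., 2.]])   # m = 2 < n = 4

r = np.linalg.matrix_rank(A)      # r = number of pivots = 2
print(A.shape[1] - r)             # 2 free variables, so dim N(A) = 2
print(null_space(A).shape[1])     # 2 basis vectors for N(A)
```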
Although basis elements cannot be parallel (why not?), they can be orthogonal. In fact, the coefficients in the expression of an arbitrary \(\v\in V\) in terms of orthogonal basis elements are easy to find as we shall see.
Theorem 5.4.48. Orthogonal sets are linearly independent.
Let \(V\) be an inner product space. If \(\boldsymbol{\Phi}\) is an orthogonal set, then \(\boldsymbol{\Phi}\) is linearly independent.
Proof.
The proof is a homework exercise.
Definition 5.4.49. Orthogonal and orthonormal bases.
An orthogonal basis for an inner product space \(V\) is a basis for \(V\) which is an orthogonal set. An orthonormal basis for an inner product space \(V\) is a basis for \(V\) which is an orthonormal set.
Theorem 5.4.50. Expressing \(\v\in V\) by orthogonal basis elements.
Let \(V\) be an inner product space, let \(\v\in V\) and suppose \(\left\{\vec{u_1},\vec{u_2},\ldots,\vec{u_n}\right\}\) is an orthogonal basis for \(V.\) Then in the representation
\[ \v=c_1\vec{u_1}+c_2\vec{u_2}+\cdots+c_n\vec{u_n} \]
the coefficients are given by \(c_i=\dfrac{\ip{\v}{\vec{u_i}}}{\ip{\vec{u_i}}{\vec{u_i}}}.\)
To find the \(c_i\)'s we take inner products and use Theorem 2.4.2 and the fact that \(\left\{\vec{u_1},\vec{u_2},\ldots,\vec{u_n}\right\}\) is an orthogonal set:
\[ \ip{\v}{\vec{u_i}}=\ip{c_1\vec{u_1}+c_2\vec{u_2}+\cdots+c_n\vec{u_n}}{\vec{u_i}}=c_i\ip{\vec{u_i}}{\vec{u_i}}, \]
so that \(c_i=\dfrac{\ip{\v}{\vec{u_i}}}{\ip{\vec{u_i}}{\vec{u_i}}}.\)
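In \(\R^n\) with the dot product the coefficient formula is immediate to apply; here is a sketch (NumPy assumed) against a hand-picked orthogonal basis of \(\R^3.\)

```python
import numpy as np

u1 = np.array([1.,  1., 0.])
u2 = np.array([1., -1., 0.])
u3 = np.array([0.,  0., 1.])
basis = [u1, u2, u3]              # pairwise orthogonal, hence a basis of R^3

v = np.array([3., 1., 2.])
c = [np.dot(v, u) / np.dot(u, u) for u in basis]
print(c)                          # [2.0, 1.0, 2.0]

v_rebuilt = sum(ci * ui for ci, ui in zip(c, basis))
print(np.allclose(v_rebuilt, v))  # True: v = 2*u1 + 1*u2 + 2*u3
```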