
Section 5.1 Vector Spaces: Fundamentals

A vector space is an algebraic structure with certain useful properties. Essentially, a vector space is a set of objects which respects linearity, meaning that closure under the taking of linear combinations (CLC) is its fundamental attribute. In this chapter we cover general vector spaces, specialize to the vector space \(\R^n,\) and introduce inner product spaces, which are vector spaces equipped with a generalization of the dot product.

Subsection 5.1.1 Vector Spaces


Remark 5.1.1. Fields.

Consider a set \({\Bbb F}\) and suppose two binary operations denoted \(+,\cdot\) (which, it is important to note, do not necessarily represent ordinary addition and multiplication) are defined on \({\Bbb F}.\) If the field axioms below are satisfied for all \(a,b,c\in{\Bbb F},\) then \({\Bbb F}(+,\cdot)\) is a field.
  1. Associativity of addition and multiplication: \(\,a+(b+c)=(a+b)+c\) and \(a\cdot(b\cdot c)=(a\cdot b)\cdot c.\)
  2. Commutativity of addition and multiplication: \(\,a+b=b+a\) and \(a\cdot b=b\cdot a.\)
  3. Existence of additive and multiplicative identities: \(\exists\,0,1\in{\Bbb F}\) with \(a+0=a\) and \(a\cdot 1=a.\)
  4. Additive inverses: There exists \(-a\in{\Bbb F}\) for which \(a+(-a)=0.\)
  5. Multiplicative inverses: For nonzero \(a,\) there exists \(a^{-1}\in{\Bbb F}\) for which \(a\cdot a^{-1}=1.\)
  6. Distributivity of multiplication over addition: \(\,a\cdot(b+c)=a\cdot b+a\cdot c.\)
In a field we may use juxtaposition rather than \(\cdot\) to denote multiplication.
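
For readers who wish to experiment, the field axioms can be checked exhaustively for a finite field. The following Python sketch (an illustration only, not part of the formal development) verifies all six axioms for \({\Bbb F}_2,\) taking \(+\) to be addition mod 2 and \(\cdot\) to be multiplication mod 2.

```python
from itertools import product

# The two-element field F_2 = ({0,1}, +, .), with + as addition mod 2
# and . as multiplication mod 2.
F2 = (0, 1)
add = lambda a, b: (a + b) % 2
mul = lambda a, b: (a * b) % 2

for a, b, c in product(F2, repeat=3):
    assert add(a, add(b, c)) == add(add(a, b), c)             # axiom 1 (+)
    assert mul(a, mul(b, c)) == mul(mul(a, b), c)             # axiom 1 (.)
    assert add(a, b) == add(b, a) and mul(a, b) == mul(b, a)  # axiom 2
    assert mul(a, add(b, c)) == add(mul(a, b), mul(a, c))     # axiom 6

for a in F2:
    assert add(a, 0) == a and mul(a, 1) == a                  # axiom 3
    assert any(add(a, x) == 0 for x in F2)                    # axiom 4
    if a != 0:
        assert any(mul(a, x) == 1 for x in F2)                # axiom 5

print("All field axioms hold for F_2.")
```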

Definition 5.1.2. Vector space.

Let \({\Bbb F}\) be a field and let \(V\) be a set on which two binary operations, vector addition
\begin{equation*} \boldsymbol{+}\,\,\,:V\times V\rightarrow V,\,\,(\u,\v)\mapsto\u+\v \end{equation*}
and scalar multiplication
\begin{equation*} \boldsymbol{\cdot}\,\,\,:{\Bbb F}\times V\rightarrow V,\,\,(a,\v)\mapsto a\v \end{equation*}
are defined, under which the set \(V\) is closed and for which the following eight axioms are satisfied.
  1. Associativity of vector addition: For all \(\u,\v,\w\in V,\) \(\,(\u+\v)+\w=\u+(\v+\w).\)
  2. Commutativity of vector addition: For all \(\u,\v\in V,\) \(\,\u+\v=\v+\u.\)
  3. Additive identity: There exists a unique element \(\vec{0}\in V\) such that for all \(\v\in V,\,\v+\vec{0}=\v.\)
  4. Additive inverses: For all \(\v\in V\) there exists a unique element \(-\v\in V\) such that \(-\v+\v=\vec{0}.\)
  5. Scalar multiplication identity: For all \(\v\in V,\,1\v=\v.\)
  6. Scalar multiplication compatibility with field multiplication: For all \(\v\in V\) and for all \(a,b\in{\Bbb F},\,a(b\v)=(ab)\v.\)
  7. Scalar multiplication distributes over vector addition: For all \(\u,\v\in V\) and for all \(a\in{\Bbb F},\,a(\u+\v)=a\u+a\v.\)
  8. Scalar multiplication distributes over field addition: For all \(\v\in V\) and for all \(a,b\in{\Bbb F},\,(a+b)\v=a\v+b\v.\)
Then \(V({\Bbb F},+,\cdot)\) is a vector space over \({\Bbb F}.\) When the context is clear, we will abuse notation a bit and denote the vector space \(V({\Bbb F},+,\cdot)\) simply by \(V.\)

Remark 5.1.3. We will use \({\Bbb F}=\R\).

We need the notion of field at least in the background in support of the notion of vector space; the field is where the coefficients of all linear combinations come from. 
Though it is interesting to allow for general fields \({\Bbb F},\) or specific fields like \({\Bbb C}\) or \(\Q\) or \({\Bbb F}_2=\left(\left\{0,1\right\},+,\cdot\right),\) in this course we will work exclusively with real vector spaces, for which \({\Bbb F}=\R\) (though it may appear that we are working with \(\Q,\) since we will only rarely encounter irrational numbers).

Remark 5.1.4. Vectors generalized.

Note that the word “vector” has a much broader meaning than in Chapter 2: in this chapter vectors can be matrices or functions or other objects, as long as the set of objects, under suitable operations of addition and scalar multiplication, is shown to comprise a vector space over the field \(\R.\)

Definition 5.1.5. CLC generalized.

Suppose \(S\) is a subset of a vector space \(V.\) We say that \(S\) is closed under the taking of linear combinations, or CLC, if for all \(a,b\in\R\) and for all \(\u,\v\in S,\) \(\,a\u+b\v\in S.\)

Remark 5.1.6.

Note that since a vector space must be closed under both vector addition and scalar multiplication, it follows immediately that if \(V\) is known to be a vector space, then it is closed under the taking of linear combinations (CLC).
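
Theorem 5.1.7. Every vector space contains \(\vec{0}\).

Let \(V\) be a vector space. Then \(\vec{0}\in V;\) that is, every vector space contains the zero vector.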

Example 5.1.8. \(\boldsymbol{\R^n}\) is a vector space over \(\boldsymbol{\R}\).


Proof: Let \(\u=\lmatrix{c} x_1\\x_2\\ \vdots \\ x_n \rmatrix, \v =\lmatrix{c} y_1\\y_2\\ \vdots \\ y_n \rmatrix, \w=\lmatrix{c} z_1\\z_2\\ \vdots \\ z_n \rmatrix \in \R^n, r \in \R.\) Then \(x_i,y_i, z_i \in \R\) for \(i=1,\ldots,n.\)
We show that all of the properties of a vector space hold, many of which follow directly from our assumed background knowledge (ABK) in Section A.1 about operations on real numbers.
Closure: First note that \(x_i+y_i, rx_i \in \R\) since \(\R\) is closed under addition and multiplication (ABK Item 2). This means that \(\u+\v, r\u \in \R^n,\) and so \(\R^n\) is closed under vector addition and scalar multiplication.
  1. (Commutativity of vector addition): Since \(x_i+y_i=y_i+x_i\) by the commutativity of addition over \(\R\) (ABK Item 3), it follows that \(\u+\v=\v+\u\text{,}\) thus vector addition is commutative.
  2. (Associativity of vector addition): Since \((x_i+y_i)+z_i=x_i+(y_i+z_i)\) by the associativity of addition over \(\R\) (ABK Item 4), we have \((\u+\v)+\w=\u+(\v+\w)\text{,}\) thus vector addition is associative.
  3. (Additive identity): Let \(\vec{0}=\lmatrix{c} 0\\0\\ \vdots \\ 0 \rmatrix \in \R^n\text{;}\) then since \(x_i+0=x_i\) for all \(x_i\in \R\) (ABK Item 19), we see that \(\u+\vec{0}=\u,\) thus \(\vec{0}\) is the additive identity in \(\R^n.\)
  4. (Additive inverses): We know \(-\u \in \R^n\) by closure under scalar multiplication above. We see that \(\u+(-\u)=\lmatrix{c} x_1 + (-x_1) \\ x_2 + (-x_2) \\ \vdots \\ x_n+(-x_n) \rmatrix = \vec{0}.\) By commutativity we also know \(-\u+\u=\vec{0}.\) Hence \(-\u\) is the additive inverse of \(\u.\)
  5. (Scalar-vector multiplication identity): We know \(1\cdot x_i=x_i\) since \(1\) is the multiplicative identity over \(\R\text{,}\) thus \(1\cdot \u = \u\) as desired.
  6. (Associativity of scalar multiplication): Let \(a,b \in \R.\) Then since multiplication is associative in \(\R\) (ABK Item 4), we have \(a(bx_i)=(ab)x_i,\) which means that \(a(b\u)=(ab)\u.\)
  7. (Scalar multiplication distributes over vector addition): Since multiplication distributes over addition in \(\R\) (ABK Item 5), we have \(r(x_i+y_i)=rx_i+ry_i,\) which implies that \(r(\u+\v)=r\u+r\v.\)
  8. (Scalar addition distributes over scalar-vector multiplication): Let \(a,b \in \R.\) Then \((a+b)x_i=ax_i+bx_i\) (ABK Item 5), so it follows that \((a+b)\u=a\u+b\u.\)
Having verified closure and all eight axioms, we conclude that \(\R^n\) is a vector space over \(\R.\)
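
Although the proof above is complete, a numerical spot-check can build confidence in the computations. The sketch below (assuming the NumPy library) tests the eight axioms on randomly sampled vectors in \(\R^4;\) passing such tests is of course evidence, not proof.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
u, v, w = rng.standard_normal((3, n))    # random vectors in R^n
a, b = rng.standard_normal(2)            # random scalars
zero = np.zeros(n)

assert np.allclose((u + v) + w, u + (v + w))    # axiom 1
assert np.allclose(u + v, v + u)                # axiom 2
assert np.allclose(v + zero, v)                 # axiom 3
assert np.allclose(-v + v, zero)                # axiom 4
assert np.allclose(1 * v, v)                    # axiom 5
assert np.allclose(a * (b * v), (a * b) * v)    # axiom 6
assert np.allclose(a * (u + v), a * u + a * v)  # axiom 7
assert np.allclose((a + b) * v, a * v + b * v)  # axiom 8
print("All eight axioms hold on this sample.")
```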

Example 5.1.9. Vector space examples, without proof.

The following are some examples of vector spaces over \(\R.\)
  1. For each \(n\in\N,\) the Euclidean space \(\R^n,\) where vector addition and scalar multiplication are defined as usual, is a vector space over \(\R.\)
  2. The set \(\R^{m \times n}\) of \(m\times n\) matrices, under the operations of matrix addition and scalar multiplication, is a vector space over \(\R.\) This includes the case of \(\R^n\) above.
  3. Let \(S\subseteq\R.\) The set \(\mathcal{F}(S)\) of all (real-valued) functions defined on \(S,\) where \(\forall f,g\in\mathcal{F}(S)\) and \(\forall r\in\R\) vector addition and scalar multiplication are defined as \((f+g)(x):=f(x)+g(x)\) and \((r\cdot f)(x):=r\cdot f(x),\) is a vector space over \(\R.\)
  4. The set \(\mathcal{P}_n:=\{f(x)= a_0 + a_1x+a_2x^2 + \cdots+ a_nx^n\mid a_i \in \R\}\) of polynomials of degree less than or equal to \(n\) with coefficients in \(\R,\) under polynomial addition and scalar multiplication, is a vector space over \(\R.\)
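
The space \(\mathcal{P}_n\) in item 4 can be made concrete by identifying each polynomial with its coefficient vector in \(\R^{n+1},\) under which polynomial addition and scalar multiplication become the familiar componentwise operations. A brief Python sketch of this identification (the helper `evaluate` is ours, introduced only for illustration):

```python
import numpy as np

# Identify p(x) = a_0 + a_1 x + ... + a_n x^n with (a_0, ..., a_n).
p = np.array([1.0, -2.0, 0.0, 3.0])   # 1 - 2x + 3x^3 in P_3
q = np.array([0.0, 1.0, 4.0, 0.0])    # x + 4x^2 in P_3

def evaluate(coeffs, x):
    """Evaluate the polynomial with the given coefficient vector at x."""
    return sum(a * x**k for k, a in enumerate(coeffs))

# Adding or scaling coefficient vectors agrees with the pointwise
# operations (f+g)(x) = f(x)+g(x) and (rf)(x) = r f(x):
x = 1.7
assert np.isclose(evaluate(p + q, x), evaluate(p, x) + evaluate(q, x))
assert np.isclose(evaluate(5.0 * p, x), 5.0 * evaluate(p, x))
```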

Remark 5.1.10.

Note also that in Example 5.1.9, \(\,\mathcal{P}_n\subset\F(\R);\) since \(\mathcal{P}_n\) is CLC, it turns out that \(\mathcal{P}_n\) is not just a subset of \(\F(\R),\) it is a subspace of \(\F(\R).\) Read on.

Definition 5.1.11. Subspaces.

Let \(V\) be a vector space. A subspace \(S\) of \(V\) is a nonempty subset of the set of elements of \(V\) which is itself a vector space under the same operations \(+\) and \(\cdot\) of \(V.\)
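
Theorem 5.1.12. The subspace test.

Let \(V\) be a vector space and let \(S\) be a nonempty subset of \(V.\) Then \(S\) is a subspace of \(V\) if and only if \(S\) is CLC.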

Remark 5.1.13.

Theorem 5.1.7 provides a way to demonstrate non-emptiness for a potential subspace, with an additional benefit. Let \(V\) be a vector space and \(S\subset V.\) If we show \(\vec{0}\in S,\) then we have shown that \(S\) is nonempty, making progress toward showing that \(S\) is a subspace of \(V.\) If instead we show \(\vec{0}\not\in S,\) then by Theorem 5.1.7 \(S\) is not a vector space, and hence by Definition 5.1.11 \(\,S\) is not a subspace of \(V.\)

Remark 5.1.14. The negation of CLC.

The negation of CLC occurs when there exist elements \(\u,\v\in S\) and coefficients \(a,b\in\R\) which yield a linear combination which is not in the set \(S\text{:}\)
\begin{equation} \exists\,\,\u,\v\in S,\,a,b\in\R\text{ such that }a\u+b\v\not\in S.\tag{5.1.1} \end{equation}
By Theorem 5.1.12, CLC fails for any nonempty subset of a vector space which is not a subspace. Since the failure of CLC is existentially quantified, a single counterexample is enough to show that a nonempty set \(S\) is not a subspace of a vector space \(V.\) Though we may be able to offer a general counterexample, we typically show the negation of CLC in one of two simpler ways:
\begin{equation} \exists\,\,\u\in S,\,a\in\R\text{ such that }a\u\not\in S.\tag{5.1.2} \end{equation}
or
\begin{equation} \exists\,\,\u,\v\in S\text{ such that }\u+\v\not\in S.\tag{5.1.3} \end{equation}
As an aside, showing that \(\vec{0}\not\in S\) also shows the negation of CLC: if \(\vec{0}\not\in S\) and \(\u\) is any element of \(S,\) then (5.1.2) holds with \(a=0,\) since \(0\u=\vec{0}\not\in S.\)

Remark 5.1.15. Determining whether \(S\subset V\) is a subspace.

When we are asked to determine whether a given subset \(S\) of some vector space \(V\) is a subspace of \(V,\) we are presented with a common problem in mathematics: “prove or disprove.” In each such problem, we are asked to prove whichever of the following statements is true, without knowing in advance which one it is: 1) “\(S\) is a subspace of \(V\)” or 2) “\(S\) is not a subspace of \(V.\)”
In such a circumstance we are forced to make an educated guess as to which statement is true and try to prove it. Guessing wrong leads to wasted effort, so it pays to guess well, and failing that, to have a good feel for when to give up and try to prove the other statement. This comes with practice and experience.
The following algorithm offers a standardized approach to subspace problems.
  1. Check whether \(\vec{0}\in S.\)
    1. If \(\vec{0}\not\in S,\) cite Theorem 5.1.7 to conclude that \(S\) is not a subspace of \(V.\)
    2. If \(\vec{0}\in S,\) conclude that \(S\) is nonempty and proceed to the CLC check.
  2. Do the CLC check after forming an opinion about whether or not you believe \(S\) to be CLC.
    1. If you believe that \(S\) is not CLC, try to show that CLC fails by doing one of the following:
      1. Find a vector \(\u\in S\) and a scalar \(a\) for which \(a\u\not\in S\) and prove it by showing that \(a\u\) does not satisfy the elementhood test for \(S,\) or
      2. Find two vectors \(\u,\v\in S\) for which \(\u+\v\not\in S\) and prove it by showing that \(\u+\v\) does not satisfy the elementhood test for \(S.\)
      If either approach works, cite Theorem 5.1.12 and conclude that \(S\) is not a subspace of \(V.\)
    2. If you believe that \(S\) is CLC, prove it and finish in one of two equivalent ways:
      1. Let \(\u,\v\) be arbitrary elements of \(S\) and let \(a,b\) be arbitrary scalars. Show that \(a\u+b\v\in S,\) then cite Theorem 5.1.12 to conclude that \(S\) is a subspace of \(V.\)
      2. Let \(\u,\v\) be arbitrary elements of \(S\) and let \(a\) be an arbitrary scalar. Show that 1) \(\u+\v\in S\) and 2) \(a\u\in S,\) then cite Definition 5.1.11 to conclude that \(S\) is a subspace of \(V.\)
This algorithm gets the job done, but please do not feel constrained by it. Deviation from the above algorithm, i.e., solving the subspace problem in your own style, is encouraged. In particular, if you are quite sure that CLC fails, feel free to skip checking that \(\vec{0}\in S\) and proceed directly to proving that CLC fails.
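
The sketch below mechanizes the first two steps of the algorithm for subsets of \(\R^n\) described by an elementhood test; the function name `probe_subspace` and its random-probing strategy are our own illustrative choices (assuming NumPy). Random probing can refute the subspace property but can never establish it, so a clean pass still calls for a proof by hand.

```python
import numpy as np

def probe_subspace(in_S, n, trials=1000, seed=0):
    """Step 1: check the zero vector.  Step 2: probe CLC on random
    samples.  Can refute the subspace property; cannot prove it."""
    if not in_S(np.zeros(n)):
        return "not a subspace: the zero vector fails the elementhood test"
    rng = np.random.default_rng(seed)
    members = [x for x in rng.standard_normal((10 * trials, n)) if in_S(x)]
    for _ in range(trials):
        if len(members) < 2:
            break
        i, j = rng.integers(len(members), size=2)
        a, b = rng.standard_normal(2)
        if not in_S(a * members[i] + b * members[j]):
            return "not a subspace: CLC fails for a sampled combination"
    return "no counterexample found; prove CLC by hand to conclude"

# The plane z = 1 (see Example 5.1.21) fails already at step 1:
print(probe_subspace(lambda v: np.isclose(v[2], 1.0), n=3))
```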

Remark 5.1.16.

Note that sometimes it may be more straightforward to prove CLC by first demonstrating closure under vector addition and then demonstrating closure under scalar multiplication.

Example 5.1.17.

Show that the set \(S:=\left\{\left.s\left(\begin{array}{r}1\\-1\\2 \end{array} \right)+t\left(\begin{array}{r}3\\0\\7 \end{array} \right)\right|s,t\in\R\right\}\) is a subspace of \(\R^3.\)
Proof: It is clear by the definition of \(S\) that \(S\) is a subset of \(\R^3.\) If we let \(s=t=0,\) then we see \(\vec{0} \in S\) and so \(S\) is a nonempty subset of \(\R^3.\)
If \(\x=s_1\left(\begin{array}{r}1\\-1\\2 \end{array} \right)+t_1\left(\begin{array}{r}3\\0\\7 \end{array} \right)\) and \(\vec{y}=s_2\left(\begin{array}{r}1\\-1\\2 \end{array} \right)+t_2\left(\begin{array}{r}3\\0\\7 \end{array} \right)\) are in \(S\) and \(a,b\in\R,\) then
\begin{align*} a\x+b\vec{y}\amp =as_1\left(\begin{array}{r}1\\ -1\\ 2\end{array}\right)+at_1\left(\begin{array}{r}3\\ 0\\ 7\end{array}\right) +bs_2\left(\begin{array}{r}1\\ -1\\ 2\end{array}\right)+bt_2\left(\begin{array}{r}3\\ 0\\ 7\end{array}\right)\\ \amp =(as_1+bs_2)\left(\begin{array}{r}1\\ -1\\ 2\end{array}\right)+(at_1+bt_2)\left(\begin{array}{r}3\\ 0\\ 7\end{array}\right) \end{align*}
which is a linear combination of \(\left(\begin{array}{r}1\\-1\\2 \end{array} \right)\) and \(\left(\begin{array}{r}3\\0\\7\end{array}\right)\) and hence passes the elementhood test for \(S.\) It follows that the CLC property holds for \(S.\)
We’ve shown that \(S\) is a nonempty subset of \(\R^3\) for which CLC holds and so it follows from Theorem 5.1.12 that \(S\) is a subspace of \(\R^3.\)
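
As a computational aside, the elementhood test for this particular \(S\) can be carried out numerically: \(\x\in S\) exactly when the system \(s\v_1+t\v_2=\x\) is consistent, which a least-squares solve detects by a zero residual. A sketch (assuming NumPy; the helper `in_S` is ours):

```python
import numpy as np

v1 = np.array([1., -1., 2.])
v2 = np.array([3., 0., 7.])
M = np.column_stack([v1, v2])

def in_S(x, tol=1e-10):
    """x is in S = {s*v1 + t*v2} exactly when the least-squares
    solution of M [s, t]^T = x leaves (essentially) zero residual."""
    st, *_ = np.linalg.lstsq(M, x, rcond=None)
    return np.linalg.norm(M @ st - x) < tol

x = 2.0 * v1 - 1.0 * v2
y = 0.5 * v1 + 4.0 * v2
a, b = -3.0, 2.5
assert in_S(x) and in_S(y) and in_S(a * x + b * y)   # CLC on a sample
assert not in_S(np.array([1., 0., 0.]))   # not all of R^3 lies in S
```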

Example 5.1.18.

Show that the set \(C_1((0,1))\) of differentiable functions defined on \((0,1)\) is a subspace of the set \(\F((0,1))\) of all functions defined on \((0,1).\)
Proof: By its definition, \(C_1((0,1))\) is a subset of \(\F((0,1))\) and the zero function is differentiable so \(0\) is an element of \(C_1((0,1)),\) therefore \(C_1((0,1))\) is a nonempty subset of \(\F((0,1)).\)
Suppose \(f,g\in C_1((0,1))\) and \(a,b\in\R.\) Then \(af+bg\) is differentiable on \((0,1)\) by the linearity of the derivative, since \((af+bg)'(x)=af'(x)+bg'(x)\) on \((0,1).\) The CLC property thus holds.
Since \(C_1((0,1))\) is a nonempty subset of \(\F((0,1))\) for which CLC holds it follows by Theorem 5.1.12 that \(C_1((0,1))\) is a subspace of \(\F((0,1)).\)

Example 5.1.19.

Let \(\x\in\R^n.\) Show that the set Ann\((\x):=\left\{A\in\R^{n\times n}\left|\right.A\x=\vec{0}\right\}\) (called the annihilator of \(\x\)) is a subspace of \(\R^{n\times n}.\)
  1. By its definition, Ann\((\x)\) is a subset of \(\R^{n \times n}\) and \(\boldsymbol{0_{n\times n}}\x=\vec{0},\) which shows that \(\boldsymbol{0_{n\times n}} \in\text{ Ann}(\x)\) so Ann\((\x)\) is a nonempty subset of \(\R^{n\times n}.\)
  2. If \(A,B\in\) Ann\((\x)\) and \(c_1,c_2\in\R,\) we have
    \begin{equation*} (c_1A+c_2B)\x=c_1A\x+c_2B\x=c_1\vec{0}+c_2\vec{0}=\vec{0} \end{equation*}
    which shows \(c_1A+c_2B \in\) Ann\((\x)\) which means the CLC property holds.
  3. Since Ann\((\x)\) is a nonempty subset of \(\R^{n \times n}\) for which CLC holds, we conclude by Theorem 5.1.12 that Ann\((\x)\) is a subspace of \(\R^{n \times n}.\)
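
A quick numerical illustration of this example (assuming NumPy; the helper `random_annihilator` is ours, built by projecting random rows orthogonal to \(\x\)):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
x = rng.standard_normal(n)

def random_annihilator(x, rng, n=3):
    """A random matrix A with A x = 0: each row is a random vector
    made orthogonal to x by subtracting its projection onto x."""
    rows = rng.standard_normal((n, n))
    return rows - np.outer(rows @ x, x) / (x @ x)

A = random_annihilator(x, rng)
B = random_annihilator(x, rng)
c1, c2 = rng.standard_normal(2)

assert np.allclose(A @ x, 0) and np.allclose(B @ x, 0)  # A, B in Ann(x)
assert np.allclose((c1 * A + c2 * B) @ x, 0)            # CLC holds
```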

Example 5.1.20.

The quarter-plane \(\Pos^2\) of the Euclidean plane \(\R^2,\) given by \(\Pos^2:=\displaystyle{\left\{\left.\begin{pmatrix}x\\y \end{pmatrix} \,\right|\,x,y>0\,\right\}},\) is not a subspace of \(\R^2.\) Prove this by showing that CLC fails.
Solution: Since \(\begin{pmatrix}1\\1 \end{pmatrix} \in\Pos^2\) and \(-1\in\R,\) but \((-1)\begin{pmatrix}1\\1 \end{pmatrix} =\begin{pmatrix}-1\\-1 \end{pmatrix} \not\in\Pos^2,\) we see that CLC fails, so by Theorem 5.1.12 \(\Pos^2\) is not a subspace of \(\R^2.\)

Example 5.1.21.

For each of the following, determine precisely why it is not a subspace of \(\R^3.\)
  1. The plane of vectors \(\displaystyle{\left\{\left.\begin{pmatrix}x\\y\\z \end{pmatrix} \,\right|\,z=1\,\right\}}.\)
    Solution: We have, for example, \(\begin{pmatrix}0\\0\\1 \end{pmatrix} ,\begin{pmatrix}0\\1\\1 \end{pmatrix}\) in the set, but the vector sum \(\begin{pmatrix}0\\1\\2 \end{pmatrix}\) does not pass the elementhood test, so it is not in the set and by Theorem 5.1.12 the set is not a subspace of \(\R^3.\)
  2. The set of vectors \(\displaystyle{\left\{\left.\begin{pmatrix}x\\y\\z \end{pmatrix} \,\right|\,xyz=0\,\right\}}.\)
    Solution: We have, for example, \(\begin{pmatrix}1\\0\\1 \end{pmatrix} ,\begin{pmatrix}0\\1\\1 \end{pmatrix}\) in the set, but the vector sum \(\begin{pmatrix}1\\1\\2 \end{pmatrix}\) does not have \(1\cdot 1\cdot 2=0\) and so does not pass the elementhood test, so it is not in the set, and by Theorem 5.1.12, the set is not a subspace of \(\R^3.\)
  3. The set of vectors \(\displaystyle{\left\{\left.\begin{pmatrix}x\\y\\z \end{pmatrix} \,\right|\,x\le y\le z\,\right\}}.\)
    Solution: We have, for example, \(\left(\begin{array}{r}-2\\0\\1 \end{array} \right)\) in the set since \(-2\le0\le1,\) but the vector \((-1)\left(\begin{array}{r}-2\\0\\1 \end{array} \right)=\left(\begin{array}{r}2\\0\\-1 \end{array} \right)\) does not satisfy \(x\le y\le z\) and so fails the elementhood test, so it is not in the set, and by Theorem 5.1.12 the set is not a subspace of \(\R^3.\)
The last example of a vector space we will give in this section, the nullspace of a matrix, is very important and will be revisited in great detail in Section 6.1. To characterize the nullspace we will need the following definition.

Definition 5.1.22. The homogeneous equation.

Let \(\A\in\R^{m\times n}.\) The equation \(\A\x=\vec{0}\) is called the homogeneous equation. The solution \(\x=\vec{0}\) to the homogeneous equation is called the trivial solution. We denote nontrivial solutions to \(\A\x=\vec{0}\) by \(\vec{x_h}.\)

Definition 5.1.23. The Nullspace of a Matrix.

Let \(\A\in\R^{m\times n}.\) The set \(\left\{\x\in\R^n\,|\,\A\x=\vec{0}\right\}\) of vectors satisfying the homogeneous equation for \(\A\) is called the nullspace of \(\A,\) denoted \(N(\A).\)

Theorem 5.1.24. The nullspace is a subspace.

Let \(\A\in\R^{m\times n}.\) Then the nullspace \(N(\A)\) is a subspace of \(\R^n.\)

Proof.

The proof is an exercise.
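
Numerically, a basis for the nullspace can be computed with SciPy (we assume its `scipy.linalg.null_space` routine is available); the following sketch exhibits a \(3\times4\) matrix of rank \(2\) and confirms that a linear combination of nullspace vectors again solves the homogeneous equation.

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 0.],
              [1., 3., 1., 1.]])   # row 3 = row 1 + row 2, so rank(A) = 2

Z = null_space(A)                  # columns form a basis of N(A)
print(Z.shape)                     # (4, 2): N(A) is a plane in R^4

# Any linear combination of nullspace vectors is again in N(A):
x = 2.0 * Z[:, 0] - 3.0 * Z[:, 1]
assert np.allclose(A @ x, 0)
```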

Subsection 5.1.2 Inner product spaces: generalizing the dot product

The reader will have noted that the vectors in a vector space may be multiplied by a field element called a scalar, and added to other vectors. No sense of vector-vector multiplication has yet been defined. In this section, we define a type of multiplication between vectors, the inner product, which yields a field element. (The cross product is another type of multiplication that can be defined on certain vector spaces, but it is not under the umbrella of elementary linear algebra, so we will not discuss it.)
Recall that orthogonality in \(\R^n\) is defined via the dot product: for nonzero \(\u,\v\in\R^n,\,\u\perp\v\) when \(\dpr{\u}{\v}=0.\) The inner product is the generalization of the dot product that allows us to define orthogonality in general vector spaces. The theory of inner product spaces is deep and wide, but we will scratch the surface only enough to handle orthogonality in general vector spaces.
Because all inner product spaces are vector spaces, all results in this text which apply to vector spaces also apply to inner product spaces.

Definition 5.1.27. Inner Product Space.

Let \(V\) be a vector space over \(\R.\) An inner product on \(V\) is a function \(\ip{\cdot}{\cdot}:V\times V\ra\R\) which satisfies the following properties: \(\fa \u,\,\v,\,\w\in V,\,\fa c\in\R,\)
  1. \(\ip{\u}{\u}\ge 0\,\,\) and \(\,\,\ip{\u}{\u}=0\quad\LRa\quad \u=\vec{0}\)
  2. \(\ip{\u+\v}{\w}=\ip{\u}{\w}+\ip{\v}{\w}\) and \(\ip{\u}{\v+\w}=\ip{\u}{\v}+\ip{\u}{\w}\)
  3. \(\ip{c\u}{\v}=c\ip{\u}{\v}\) and \(\ip{\u}{c\v}=c\ip{\u}{\v}\)
  4. \(\displaystyle \ip{\u}{\v}=\ip{\v}{\u}\)
The space \(\left(V,\R,\ip{\cdot}{\cdot}\right)\) is called a real inner product space, or simply an inner product space.
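
The dot product on \(\R^n\) is the prototypical inner product, but it is not the only one. The sketch below (assuming NumPy) spot-checks the four properties for a weighted dot product \(\ip{\u}{\v}=\u^{\mathsf T}W\v\) with positive diagonal weights, a choice of ours for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
W = np.diag([1.0, 2.0, 5.0])   # positive weights: positive definiteness

def ip(u, v):
    """A weighted inner product <u, v> = u^T W v on R^3."""
    return u @ W @ v

u, v, w = rng.standard_normal((3, n))
c = rng.standard_normal()

assert ip(u, u) >= 0                                  # property 1 (sampled)
assert np.isclose(ip(u + v, w), ip(u, w) + ip(v, w))  # property 2
assert np.isclose(ip(c * u, v), c * ip(u, v))         # property 3
assert np.isclose(ip(u, v), ip(v, u))                 # property 4
```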

Definition 5.1.28. Orthogonality.

Let \(V\) be an inner product space. Two nonzero vectors \(\u,\v\in V\) are orthogonal if \(\ip{\u}{\v}=0.\)

We may generalize the notion of Euclidean norm to vector spaces as follows.

Definition 5.1.31. Norms.

Let \(\left(V,\R\right)\) be a vector space. A norm on \(\left(V,\R\right)\) is a function
\begin{equation*} \|\cdot\|:\,V\rightarrow\R \end{equation*}
for which the following properties hold: For all \(\u,\v\in V\) and for all \(c\in\R,\)
  1. \(\displaystyle \|\v\|\ge0,\)
  2. \(\|\v\|=0\) if and only if \(\v=\vec{0},\)
  3. \(\|\u+\v\|\le\|\u\|+\|\v\|,\) and
  4. \(\displaystyle \|c\v\|=|c|\|\v\|.\)
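
Several familiar functions on \(\R^n\) satisfy these properties, not only the Euclidean norm. The following sketch (assuming NumPy) spot-checks all four properties for the \(1\)-norm, the Euclidean norm, and the max norm on random samples.

```python
import numpy as np

rng = np.random.default_rng(3)
u, v = rng.standard_normal((2, 4))
c = rng.standard_normal()

for norm in (lambda x: np.abs(x).sum(),   # 1-norm
             lambda x: np.sqrt(x @ x),    # Euclidean (2-) norm
             lambda x: np.abs(x).max()):  # max norm
    assert norm(v) >= 0                               # property 1
    assert np.isclose(norm(np.zeros(4)), 0.0)         # property 2 (at 0)
    assert norm(u + v) <= norm(u) + norm(v) + 1e-12   # property 3
    assert np.isclose(norm(c * v), abs(c) * norm(v))  # property 4
```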

We will state and prove a number of results about vector spaces and inner product spaces in the subsequent chapters. Since vector spaces are more general than inner product spaces, we will generally assume a vector space, unless the notion of inner product is required, in which case we will assume an inner product space.