
Section 20.3 Linear Independence

Let \(S = \{v_1, v_2, \ldots, v_n\}\) be a set of vectors in a vector space \(V\). If there exist scalars \(\alpha_1, \alpha_2, \ldots, \alpha_n \in F\) such that not all of the \(\alpha_i\)'s are zero and \begin{equation*}\alpha_1 v_1 + \alpha_2 v_2 + \cdots + \alpha_n v_n = {\mathbf 0 },\end{equation*} then \(S\) is said to be linearly dependent. If the set \(S\) is not linearly dependent, then it is said to be linearly independent. More specifically, \(S\) is a linearly independent set if \begin{equation*}\alpha_1 v_1 + \alpha_2 v_2 + \cdots + \alpha_n v_n = {\mathbf 0 }\end{equation*} implies that \begin{equation*}\alpha_1 = \alpha_2 = \cdots = \alpha_n = 0\end{equation*} for any set of scalars \(\{ \alpha_1, \alpha_2, \ldots, \alpha_n \}\).
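For example, in \({\mathbb R}^2\) the vectors \((1, 2)\) and \((2, 4)\) are linearly dependent, since \begin{equation*}2(1,2) + (-1)(2,4) = (0,0),\end{equation*} while \((1, 0)\) and \((1, 1)\) are linearly independent: \(\alpha_1 (1,0) + \alpha_2 (1,1) = (\alpha_1 + \alpha_2, \alpha_2) = (0,0)\) forces \(\alpha_2 = 0\) and then \(\alpha_1 = 0\).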

The definition of linear dependence makes more sense if we consider the following proposition.

Proposition 20.10

Let \(\{ v_1, v_2, \ldots, v_n \}\) be a set of linearly independent vectors in a vector space. Suppose that \begin{equation*}v = \alpha_1 v_1 + \alpha_2 v_2 + \cdots + \alpha_n v_n = \beta_1 v_1 + \beta_2 v_2 + \cdots + \beta_n v_n.\end{equation*} Then \(\alpha_1 = \beta_1, \alpha_2 = \beta_2, \ldots, \alpha_n = \beta_n\). Indeed, subtracting the two expressions for \(v\) gives \begin{equation*}(\alpha_1 - \beta_1) v_1 + (\alpha_2 - \beta_2) v_2 + \cdots + (\alpha_n - \beta_n) v_n = {\mathbf 0},\end{equation*} and linear independence forces every coefficient \(\alpha_i - \beta_i\) to be zero.

The following proposition is a consequence of the fact that any system of homogeneous linear equations with more unknowns than equations will have a nontrivial solution. We leave the details of the proof for the end-of-chapter exercises.

Proposition 20.11

Suppose that a vector space \(V\) is spanned by \(n\) vectors. If \(m > n\), then any set of \(m\) vectors in \(V\) must be linearly dependent.
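For instance, any four vectors \(w_1, w_2, w_3, w_4\) in \({\mathbb R}^3\) must be linearly dependent: written out in coordinates, the equation \begin{equation*}\alpha_1 w_1 + \alpha_2 w_2 + \alpha_3 w_3 + \alpha_4 w_4 = {\mathbf 0}\end{equation*} is a homogeneous system of three linear equations in the four unknowns \(\alpha_1, \alpha_2, \alpha_3, \alpha_4\), and such a system always has a nontrivial solution.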

A set \(\{ e_1, e_2, \ldots, e_n \}\) of vectors in a vector space \(V\) is called a basis for \(V\) if \(\{ e_1, e_2, \ldots, e_n \}\) is a linearly independent set that spans \(V\).

Example 20.12

The vectors \(e_1 = (1, 0, 0)\), \(e_2 = (0, 1, 0)\), and \(e_3 =(0, 0, 1)\) form a basis for \({\mathbb R}^3\). The set certainly spans \({\mathbb R}^3\), since any arbitrary vector \((x_1, x_2, x_3)\) in \({\mathbb R}^3\) can be written as \(x_1 e_1 + x_2 e_2 + x_3 e_3\). Also, none of the vectors \(e_1, e_2, e_3\) can be written as a linear combination of the other two; hence, they are linearly independent. The vectors \(e_1, e_2, e_3\) are not the only basis of \({\mathbb R}^3\): the set \(\{ (3, 2, 1), (3, 2, 0), (1, 1, 1) \}\) is also a basis for \({\mathbb R}^3\).
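One way to verify this last claim is to compute the determinant of the matrix whose rows are the three vectors: \begin{equation*}\det \begin{pmatrix} 3 & 2 & 1 \\ 3 & 2 & 0 \\ 1 & 1 & 1 \end{pmatrix} = 3(2 - 0) - 2(3 - 0) + 1(3 - 2) = 1 \neq 0.\end{equation*} A nonzero determinant means that the rows are linearly independent, and three linearly independent vectors in \({\mathbb R}^3\) form a basis.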

Example 20.13

Let \({\mathbb Q}( \sqrt{2}\, ) = \{ a + b \sqrt{2} : a, b \in {\mathbb Q} \}\). The sets \(\{1, \sqrt{2}\, \}\) and \(\{1 + \sqrt{2}, 1 - \sqrt{2}\, \}\) are both bases of \({\mathbb Q}( \sqrt{2}\, )\).
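To see, for instance, that \(\{1 + \sqrt{2}, 1 - \sqrt{2}\, \}\) is a basis, note that it spans \({\mathbb Q}(\sqrt{2}\, )\), since \begin{equation*}1 = \frac{1}{2}(1 + \sqrt{2}\,) + \frac{1}{2}(1 - \sqrt{2}\,) \qquad \text{and} \qquad \sqrt{2} = \frac{1}{2}(1 + \sqrt{2}\,) - \frac{1}{2}(1 - \sqrt{2}\,),\end{equation*} and that it is linearly independent, since \(a(1 + \sqrt{2}\,) + b(1 - \sqrt{2}\,) = (a + b) + (a - b)\sqrt{2} = 0\) with \(a, b \in {\mathbb Q}\) forces \(a + b = 0\) and \(a - b = 0\) (because \(\sqrt{2}\) is irrational), so \(a = b = 0\).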

From the last two examples it should be clear that a given vector space has several different bases. In fact, there are infinitely many bases for both of these examples. \emph{In general, there is no unique basis for a vector space}. However, every basis of \({\mathbb R}^3\) consists of exactly three vectors, and every basis of \({\mathbb Q}(\sqrt{2}\, )\) consists of exactly two vectors. This is a consequence of the next proposition.

Proposition 20.14

Let \(\{ e_1, e_2, \ldots, e_m \}\) and \(\{ f_1, f_2, \ldots, f_n \}\) be two bases for a vector space \(V\). Then \(m = n\). This follows from Proposition 20.11: each basis spans \(V\) and the other is linearly independent, so \(m \leq n\) and \(n \leq m\).

If \(\{ e_1, e_2, \ldots, e_n \}\) is a basis for a vector space \(V\), then we say that the dimension of \(V\) is \(n\) and we write \(\dim V = n\). We will leave the proof of the following theorem as an exercise.

Theorem 20.15

Let \(V\) be a vector space of dimension \(n\).

1. If \(S = \{v_1, \ldots, v_n\}\) is a set of \(n\) linearly independent vectors in \(V\), then \(S\) is a basis for \(V\).

2. If \(S = \{v_1, \ldots, v_n\}\) is a set of \(n\) vectors that spans \(V\), then \(S\) is a basis for \(V\).

3. If \(S = \{v_1, \ldots, v_k\}\) is a set of linearly independent vectors in \(V\) with \(k < n\), then there exist vectors \(v_{k+1}, \ldots, v_n\) such that \(\{v_1, v_2, \ldots, v_n\}\) is a basis for \(V\).
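For example, the vectors \((1, 0, 0)\) and \((1, 1, 0)\) are linearly independent in \({\mathbb R}^3\), and adjoining \((0, 0, 1)\) extends them to a basis: \(\alpha_1 (1,0,0) + \alpha_2 (1,1,0) + \alpha_3 (0,0,1) = (\alpha_1 + \alpha_2, \alpha_2, \alpha_3) = (0,0,0)\) forces \(\alpha_2 = 0\), \(\alpha_3 = 0\), and then \(\alpha_1 = 0\), so the three vectors are linearly independent and, by the first part of the theorem, form a basis of \({\mathbb R}^3\).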