
Basis of a linear space


A set of linearly independent vectors is a basis for a given linear space if and only if every vector in the space can be written as a linear combination of the vectors in the set.


Let us start with a formal definition of basis.

Definition Let $x_{1},\ldots,x_{n}$ be $n$ linearly independent vectors. Let $S$ be a linear space. The vectors $x_{1},\ldots,x_{n}$ are said to be a basis for $S$ if and only if, for any $s\in S$, there exist $n$ scalars $\alpha_{1},\ldots,\alpha_{n}$ such that $s=\alpha_{1}x_{1}+\ldots+\alpha_{n}x_{n}$.

In other words, if any vector $s\in S$ can be represented as a linear combination of $x_{1},\ldots,x_{n}$, then these vectors are a basis for $S$ (provided they are also linearly independent).

Example Let $x_{1}$ and $x_{2}$ be two $2\times 1$ column vectors defined as follows: [eq5] These two vectors are linearly independent (see Exercise 1 in the exercise set on linear independence). We are going to prove that $x_{1}$ and $x_{2}$ are a basis for the set $S=\mathbb{R}^{2}$ of all $2\times 1$ real vectors. Take a vector $s\in S$ and denote its two entries by $s_{1}$ and $s_{2}$. The vector $s$ can be written as a linear combination of $x_{1}$ and $x_{2}$ if there exist two coefficients $\alpha_{1}$ and $\alpha_{2}$ such that $s=\alpha_{1}x_{1}+\alpha_{2}x_{2}$. This can be written as [eq7] Therefore, the two coefficients $\alpha_{1}$ and $\alpha_{2}$ need to satisfy the following system of linear equations: [eq8] From the second equation, we obtain [eq9] By substituting it into the first equation, we get [eq10] or [eq11] As a consequence, [eq12] Thus, we have found two coefficients that allow us to express $s$ as a linear combination of $x_{1}$ and $x_{2}$, for any $s\in S$. Furthermore, $x_{1}$ and $x_{2}$ are linearly independent. As a consequence, they are a basis for $S$.
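The same computation can be carried out numerically. The sketch below uses two hypothetical linearly independent vectors (stand-ins for $x_{1}$ and $x_{2}$ above, whose exact entries are not reproduced here) and solves the linear system for the coefficients $\alpha_{1}$ and $\alpha_{2}$:

```python
import numpy as np

# Hypothetical basis vectors for R^2 (illustrative choice; any pair of
# linearly independent 2x1 vectors works the same way).
x1 = np.array([1.0, 2.0])
x2 = np.array([3.0, 1.0])

# Stack the basis vectors as columns of a matrix B, so that
# B @ [alpha1, alpha2] = s is exactly the system solved in the example.
B = np.column_stack([x1, x2])

# Linear independence of the columns <=> B has full rank.
assert np.linalg.matrix_rank(B) == 2

# Any vector s in R^2 can then be represented in terms of the basis.
s = np.array([5.0, 4.0])
alpha = np.linalg.solve(B, s)

# The coefficients reproduce s as a linear combination of x1 and x2.
assert np.allclose(alpha[0] * x1 + alpha[1] * x2, s)
```

For this particular choice of vectors, the solver returns $\alpha_{1}=1.4$ and $\alpha_{2}=1.2$; the same two assertions succeed for any full-rank choice of columns.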

Uniqueness of representation in terms of a basis

An important fact is that the representation of a vector in terms of a basis is unique.

Proposition If $x_{1},\ldots,x_{n}$ are a basis for a linear space $S$, then the representation of a vector $s\in S$ in terms of the basis is unique, i.e., there exists one and only one set of coefficients $\alpha_{1},\ldots,\alpha_{n}$ such that $s=\alpha_{1}x_{1}+\ldots+\alpha_{n}x_{n}$.


Proof. The proof is by contradiction. Suppose there were two different sets of coefficients $\alpha_{1},\ldots,\alpha_{n}$ and $\beta_{1},\ldots,\beta_{n}$ such that $s=\alpha_{1}x_{1}+\ldots+\alpha_{n}x_{n}$ and $s=\beta_{1}x_{1}+\ldots+\beta_{n}x_{n}$. If we subtract the second equation from the first, we obtain $0=\left(\alpha_{1}-\beta_{1}\right)x_{1}+\ldots+\left(\alpha_{n}-\beta_{n}\right)x_{n}$. Since the two sets of coefficients are different, there exists at least one $k$ such that $\alpha_{k}-\beta_{k}\neq 0$. Thus, there exists a linear combination of $x_{1},\ldots,x_{n}$, with coefficients not all equal to zero, that gives the zero vector as a result. But this implies that $x_{1},\ldots,x_{n}$ are not linearly independent, which contradicts our hypothesis ($x_{1},\ldots,x_{n}$ are a basis, hence they are linearly independent).
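Uniqueness can also be illustrated numerically: when the basis vectors are stacked as the columns of an invertible matrix, the coefficient vector is the unique solution of a linear system, and any perturbed set of coefficients fails to reproduce the vector. The basis below is a hypothetical illustrative choice, not taken from the article:

```python
import numpy as np

# Hypothetical basis of R^2 stacked as columns of B (illustrative choice).
B = np.column_stack([np.array([1.0, 2.0]), np.array([3.0, 1.0])])
s = np.array([5.0, 4.0])

# Because the columns are linearly independent, B is invertible and the
# coefficient vector alpha with B @ alpha = s is unique: alpha = B^{-1} s.
alpha = np.linalg.solve(B, s)
assert np.allclose(B @ alpha, s)

# Any other candidate coefficient vector beta must fail to reproduce s;
# otherwise B @ (alpha - beta) = 0 with alpha - beta != 0, contradicting
# the linear independence of the columns of B.
beta = alpha + np.array([0.1, -0.1])
assert not np.allclose(B @ beta, s)
```

This is exactly the contradiction in the proof above: two distinct coefficient vectors representing the same $s$ would force a nontrivial linear combination of the basis vectors to equal zero.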

Basis replacement theorem

The replacement theorem states that, under appropriate conditions, a given basis can be used to build another basis by replacing one of its vectors.

Proposition Let $x_{1},\ldots,x_{n}$ be a basis for a linear space $S$. Let $s\in S$. If $s\neq 0$, then a new basis can be obtained by replacing one of the vectors $x_{1},\ldots,x_{n}$ with $s$.


Proof. Because $x_{1},\ldots,x_{n}$ is a basis for $S$ and $s\in S$, there exist $n$ scalars $\alpha_{1},\ldots,\alpha_{n}$ such that $s=\alpha_{1}x_{1}+\ldots+\alpha_{n}x_{n}$. At least one of the scalars must be different from zero, because otherwise we would have $s=0$, in contradiction with our hypothesis that $s\neq 0$. Without loss of generality, we can assume that $\alpha_{1}\neq 0$ (if it is not, we can re-number the vectors in the basis). Now, consider the set of vectors obtained from our basis by replacing $x_{1}$ with $s$: $s,x_{2},\ldots,x_{n}$. If this new set of vectors is linearly independent and spans $S$, then it is a basis and the proposition is proved.

First, we prove linear independence. Suppose $\beta_{1}s+\beta_{2}x_{2}+\ldots+\beta_{n}x_{n}=0$ for some set of scalars $\beta_{1},\ldots,\beta_{n}$. By replacing $s$ with its representation in terms of the original basis, we obtain $\beta_{1}\alpha_{1}x_{1}+\left(\beta_{1}\alpha_{2}+\beta_{2}\right)x_{2}+\ldots+\left(\beta_{1}\alpha_{n}+\beta_{n}\right)x_{n}=0$. Because $x_{1},\ldots,x_{n}$ are linearly independent, this implies that $\beta_{1}\alpha_{1}=0$ and $\beta_{1}\alpha_{k}+\beta_{k}=0$ for $k=2,\ldots,n$. But we know that $\alpha_{1}\neq 0$. As a consequence, $\beta_{1}\alpha_{1}=0$ implies $\beta_{1}=0$. By substitution in the other equations, we obtain $\beta_{2}=\ldots=\beta_{n}=0$. Thus, $\beta_{1}s+\beta_{2}x_{2}+\ldots+\beta_{n}x_{n}=0$ implies that all coefficients $\beta_{1},\ldots,\beta_{n}$ are equal to zero. By the very definition of linear independence, this means that $s,x_{2},\ldots,x_{n}$ are linearly independent. This concludes the first part of the proof.

We now prove that $s,x_{2},\ldots,x_{n}$ span $S$. In other words, for any $t\in S$ we need to find $n$ coefficients $\gamma_{1},\ldots,\gamma_{n}$ such that $t=\gamma_{1}s+\gamma_{2}x_{2}+\ldots+\gamma_{n}x_{n}$. Because $x_{1},\ldots,x_{n}$ is a basis, there are coefficients $\delta_{1},\ldots,\delta_{n}$ such that $t=\delta_{1}x_{1}+\ldots+\delta_{n}x_{n}$. From previous results, we have that $s=\alpha_{1}x_{1}+\ldots+\alpha_{n}x_{n}$ and, as a consequence, $x_{1}=\frac{1}{\alpha_{1}}\left(s-\alpha_{2}x_{2}-\ldots-\alpha_{n}x_{n}\right)$. Thus, we can write $t=\frac{\delta_{1}}{\alpha_{1}}s+\left(\delta_{2}-\frac{\delta_{1}\alpha_{2}}{\alpha_{1}}\right)x_{2}+\ldots+\left(\delta_{n}-\frac{\delta_{1}\alpha_{n}}{\alpha_{1}}\right)x_{n}$. This means that the desired linear representation $t=\gamma_{1}s+\gamma_{2}x_{2}+\ldots+\gamma_{n}x_{n}$ is achieved with $\gamma_{1}=\delta_{1}/\alpha_{1}$ and $\gamma_{k}=\delta_{k}-\delta_{1}\alpha_{k}/\alpha_{1}$ for $k=2,\ldots,n$. As a consequence, $s,x_{2},\ldots,x_{n}$ span $S$. This concludes the second and last part of the proof.
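The replacement step can be checked numerically: replace a basis vector whose coefficient in the representation of $s$ is nonzero, then verify that the new set still has full rank and still spans the space. The sketch below uses a hypothetical basis of $\mathbb{R}^{3}$ for illustration; the theorem itself is general:

```python
import numpy as np

# Hypothetical basis of R^3 (illustrative; the standard basis).
x1 = np.array([1.0, 0.0, 0.0])
x2 = np.array([0.0, 1.0, 0.0])
x3 = np.array([0.0, 0.0, 1.0])

# A nonzero vector s expressed in the basis, with alpha_1 != 0,
# so the theorem lets us replace x1 with s.
alpha = np.array([2.0, -1.0, 3.0])
s = alpha[0] * x1 + alpha[1] * x2 + alpha[2] * x3

# Replace x1 (whose coefficient is nonzero) with s.
new_basis = np.column_stack([s, x2, x3])

# The new set is still linearly independent: the matrix has full rank.
assert np.linalg.matrix_rank(new_basis) == 3

# It also spans R^3: any vector t has a representation in the new basis.
t = np.array([4.0, 5.0, 6.0])
gamma = np.linalg.solve(new_basis, t)
assert np.allclose(new_basis @ gamma, t)
```

Note that the rank check would fail if we instead replaced a vector whose coefficient in the representation of $s$ is zero, which is why the proof needs $\alpha_{1}\neq 0$.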
