A set of linearly independent vectors constitutes a basis for a given linear space if and only if every vector in the space can be written as a linear combination of the vectors in the set.
Let us start with a formal definition of basis.
Definition Let $S$ be a linear space. Let $B_1, \dots, B_K \in S$ be $K$ linearly independent vectors. Then, $B_1, \dots, B_K$ are said to be a basis for $S$ if and only if, for any $s \in S$, there exist $K$ scalars $\alpha_1$, ..., $\alpha_K$ such that
$$s = \alpha_1 B_1 + \alpha_2 B_2 + \dots + \alpha_K B_K.$$
In other words, if any vector $s \in S$ can be represented as a linear combination of $B_1, \dots, B_K$, then these vectors are a basis for $S$ (provided that they are also linearly independent).
Example Let $B_1$ and $B_2$ be two linearly independent $2 \times 1$ column vectors (see Exercise 1 in the exercise set on linear independence). We are going to prove that $B_1$ and $B_2$ are a basis for $\mathbb{R}^2$, the set of all $2 \times 1$ real vectors. Take any vector $s \in \mathbb{R}^2$ and denote its two entries by $s_1$ and $s_2$. The vector $s$ can be written as a linear combination of $B_1$ and $B_2$ if there exist two coefficients $\alpha_1$ and $\alpha_2$ such that
$$s = \alpha_1 B_1 + \alpha_2 B_2.$$
Writing this equality entry by entry, we obtain a system of two linear equations in the two unknowns $\alpha_1$ and $\alpha_2$, whose coefficient matrix is the $2 \times 2$ matrix having $B_1$ and $B_2$ as its columns. Because $B_1$ and $B_2$ are linearly independent, this matrix is invertible, so the system has a solution $\alpha_1$, $\alpha_2$ for every choice of $s_1$ and $s_2$. Thus, we are always able to find two coefficients that allow us to express $s$ as a linear combination of $B_1$ and $B_2$, for any $s \in \mathbb{R}^2$. Furthermore, $B_1$ and $B_2$ are linearly independent. As a consequence, they are a basis for $\mathbb{R}^2$.
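To make the computation concrete, here is a minimal numerical sketch. The particular vectors $B_1 = (1, 2)$ and $B_2 = (3, 1)$ and the target vector $s$ are illustrative assumptions (they are not the vectors from the exercise referenced above); the sketch simply solves the $2 \times 2$ system for the coefficients with NumPy.

```python
import numpy as np

# Illustrative (assumed) basis vectors for R^2; any two linearly
# independent 2x1 vectors would do.
B1 = np.array([1.0, 2.0])
B2 = np.array([3.0, 1.0])

# 2x2 matrix whose columns are B1 and B2.
B = np.column_stack([B1, B2])

# An arbitrary vector s in R^2 that we want to represent.
s = np.array([4.0, -1.0])

# Solve B @ alpha = s for the coefficients alpha_1, alpha_2.
alpha = np.linalg.solve(B, s)

print(alpha)
# Check that alpha_1 * B1 + alpha_2 * B2 reproduces s.
print(np.allclose(alpha[0] * B1 + alpha[1] * B2, s))  # True
```

If $B_2$ were replaced by a multiple of $B_1$, the matrix would be singular and `np.linalg.solve` would raise an error, reflecting the fact that linear independence is what guarantees that the coefficients can be found.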
An important fact is that the representation of a vector in terms of a basis is unique.
Proposition If $B_1, \dots, B_K$ are a basis for a linear space $S$, then the representation of a vector $s \in S$ in terms of the basis is unique, i.e., there exists one and only one set of coefficients $\alpha_1, \dots, \alpha_K$ such that
$$s = \alpha_1 B_1 + \alpha_2 B_2 + \dots + \alpha_K B_K.$$
The proof is by contradiction. Suppose there were two different sets of coefficients $\alpha_1, \dots, \alpha_K$ and $\beta_1, \dots, \beta_K$ such that
$$s = \alpha_1 B_1 + \dots + \alpha_K B_K \qquad \text{and} \qquad s = \beta_1 B_1 + \dots + \beta_K B_K.$$
If we subtract the second equation from the first, we obtain
$$0 = (\alpha_1 - \beta_1) B_1 + \dots + (\alpha_K - \beta_K) B_K.$$
Since the two sets of coefficients are different, there exists at least one $k$ such that
$$\alpha_k - \beta_k \neq 0.$$
Thus, there exists a linear combination of $B_1, \dots, B_K$, with coefficients not all equal to zero, giving the zero vector as a result. But this implies that $B_1, \dots, B_K$ are not linearly independent, which contradicts our hypothesis ($B_1, \dots, B_K$ are a basis, hence they are linearly independent).
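The uniqueness claim can also be checked numerically in the special case $S = \mathbb{R}^3$. The basis below (the columns of B) is an illustrative assumption; since the matrix of basis vectors is invertible, the coefficient vector $B^{-1} s$ is the only solution of $B \alpha = s$, so the two computations must agree.

```python
import numpy as np

# Assumed illustrative basis of R^3: the columns of B.
B = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 0.0]])

s = np.array([3.0, 2.0, 1.0])

# Two different ways of computing the coefficients of s in this basis.
alpha_solve = np.linalg.solve(B, s)   # solve B @ alpha = s directly
alpha_inv = np.linalg.inv(B) @ s      # multiply by the inverse of B

# The representation is unique because B has linearly independent columns
# (it is invertible), so both computations return the same coefficients.
print(np.allclose(alpha_solve, alpha_inv))  # True
```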
The replacement theorem states that, under appropriate conditions, a given basis can be used to build another basis by replacing one of its vectors.
Proposition Let $B_1, \dots, B_K$ be a basis for a linear space $S$. Let $x \in S$. If $x \neq 0$, then a new basis can be obtained by replacing one of the vectors $B_1, \dots, B_K$ with $x$.
Because $B_1, \dots, B_K$ is a basis for $S$ and $x \in S$, there exist $K$ scalars $\alpha_1$, ..., $\alpha_K$ such that
$$x = \alpha_1 B_1 + \alpha_2 B_2 + \dots + \alpha_K B_K.$$
At least one of the scalars must be different from zero, because otherwise we would have $x = 0$, in contradiction with our hypothesis that $x \neq 0$. Without loss of generality, we can assume that $\alpha_1 \neq 0$ (if it is not, we can re-number the vectors in the basis). Now, consider the set of vectors obtained from our basis by replacing $B_1$ with $x$:
$$x, B_2, \dots, B_K.$$
If this new set of vectors is linearly independent and spans $S$, then it is a basis and the proposition is proved.

First, we are going to prove linear independence. Suppose
$$\beta_1 x + \beta_2 B_2 + \dots + \beta_K B_K = 0$$
for some set of scalars $\beta_1, \dots, \beta_K$. By replacing $x$ with its representation in terms of the original basis, we obtain
$$\beta_1 \alpha_1 B_1 + (\beta_1 \alpha_2 + \beta_2) B_2 + \dots + (\beta_1 \alpha_K + \beta_K) B_K = 0.$$
Because $B_1, \dots, B_K$ are linearly independent, this implies that
$$\beta_1 \alpha_1 = 0, \quad \beta_1 \alpha_2 + \beta_2 = 0, \quad \dots, \quad \beta_1 \alpha_K + \beta_K = 0.$$
But we know that $\alpha_1 \neq 0$. As a consequence, $\beta_1 \alpha_1 = 0$ implies $\beta_1 = 0$. By substitution in the other equations, we obtain
$$\beta_2 = \beta_3 = \dots = \beta_K = 0.$$
Thus, we can conclude that $\beta_1 x + \beta_2 B_2 + \dots + \beta_K B_K = 0$ implies that all coefficients are equal to zero. By the very definition of linear independence, this means that $x, B_2, \dots, B_K$ are linearly independent. This concludes the first part of our proof.

We now need to prove that $x, B_2, \dots, B_K$ span $S$. In other words, we need to prove that for any $s \in S$, we can find coefficients $\gamma_1, \dots, \gamma_K$ such that
$$s = \gamma_1 x + \gamma_2 B_2 + \dots + \gamma_K B_K.$$
Because $B_1, \dots, B_K$ is a basis, there are coefficients $\delta_1, \dots, \delta_K$ such that
$$s = \delta_1 B_1 + \delta_2 B_2 + \dots + \delta_K B_K.$$
From previous results, we have that
$$x = \alpha_1 B_1 + \alpha_2 B_2 + \dots + \alpha_K B_K$$
and, as a consequence,
$$B_1 = \frac{1}{\alpha_1}\left( x - \alpha_2 B_2 - \dots - \alpha_K B_K \right).$$
Thus, we can write
$$s = \frac{\delta_1}{\alpha_1} x + \left( \delta_2 - \frac{\delta_1 \alpha_2}{\alpha_1} \right) B_2 + \dots + \left( \delta_K - \frac{\delta_1 \alpha_K}{\alpha_1} \right) B_K.$$
This means that the desired linear representation is achieved with
$$\gamma_1 = \frac{\delta_1}{\alpha_1}, \qquad \gamma_k = \delta_k - \frac{\delta_1 \alpha_k}{\alpha_1} \quad \text{for } k = 2, \dots, K.$$
As a consequence, $x, B_2, \dots, B_K$ span $S$. This concludes the second and last part of the proof.
By reading the proof, we notice that we cannot choose arbitrarily the vector to be replaced with $x$: only some of the vectors $B_1, \dots, B_K$ are suitable for replacement; in particular, we can replace only those that have a non-zero coefficient in the unique representation
$$x = \alpha_1 B_1 + \alpha_2 B_2 + \dots + \alpha_K B_K.$$
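Here is a small sketch of the replacement step, again for an assumed basis of $\mathbb{R}^3$: compute the coefficients of $x$ in the current basis, pick an index whose coefficient is non-zero, swap $x$ in at that position, and check that the resulting set still has full rank. The specific matrices and variable names are illustrative choices, not part of the lecture.

```python
import numpy as np

# Assumed illustrative basis of R^3: the columns of B.
B = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 0.0]])

# A non-zero vector to be swapped into the basis.
x = np.array([0.0, 3.0, 3.0])

# Coefficients of x in terms of the current basis.
alpha = np.linalg.solve(B, x)        # here alpha = [0, 3, 0]

# Pick an index whose coefficient is non-zero (the condition in the proof).
k = int(np.flatnonzero(~np.isclose(alpha, 0.0))[0])

# Replace the k-th basis vector with x.
B_new = B.copy()
B_new[:, k] = x

# The new columns still form a basis: the matrix has full rank.
print(k, np.linalg.matrix_rank(B_new) == B.shape[1])  # 1 True
```

With the numbers above, only the second coefficient is non-zero, so $x$ can replace the second basis vector but not the first or the third, in line with the remark about which vectors are suitable for replacement.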
The basis extension theorem, also known as the Steinitz exchange lemma, says that, given a set of vectors that spans a linear space (the spanning set) and another set of linearly independent vectors (the independent set), we can form a basis for the space by picking some vectors from the spanning set and adding them to the independent set.
Proposition Let $A_1, \dots, A_K$ be a set of linearly independent vectors belonging to a linear space $S$. Let $C_1, \dots, C_M$ be a finite set of vectors that spans $S$. If the independent set $A_1, \dots, A_K$ is not a basis for $S$, then we can form a basis by adjoining some elements of $C_1, \dots, C_M$ to the independent set.
Define
$$D_0 = \{ A_1, \dots, A_K \}.$$
For $m = 1, \dots, M$: if $C_m \in \operatorname{span}(D_{m-1})$, set
$$D_m = D_{m-1};$$
otherwise, define
$$D_m = D_{m-1} \cup \{ C_m \}.$$
In the latter case, the set $D_m$ remains linearly independent, because it is formed by adjoining to the linearly independent set $D_{m-1}$ a vector that cannot be written as a linear combination of the vectors of $D_{m-1}$. At the end of this process, we have a set $D_M$ of linearly independent vectors that spans $S$: any $s \in S$ can be written as a linear combination of the vectors $C_1, \dots, C_M$, and any $C_m$ can be written as a linear combination of the vectors of $D_M$ (either $C_m$ was adjoined to the set, or it belongs to the span of the vectors already collected when it was examined). Therefore, $D_M$ is a basis for $S$.
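The proof is constructive, so it translates directly into a short procedure. The sketch below assumes vectors in $\mathbb{R}^n$ represented as NumPy arrays; the function name `extend_to_basis` and the rank-based membership test are my own illustrative choices, not part of the lecture.

```python
import numpy as np

def extend_to_basis(independent, spanning, tol=1e-10):
    """Adjoin vectors from `spanning` to `independent` until a basis is obtained.

    Both arguments are lists of 1-D NumPy arrays. A spanning vector is adjoined
    whenever it is not in the span of the vectors collected so far, which is
    detected here by an increase in matrix rank.
    """
    basis = list(independent)
    for c in spanning:
        candidate = np.column_stack(basis + [c])
        if np.linalg.matrix_rank(candidate, tol=tol) > len(basis):
            basis.append(c)  # c is not a linear combination of the current set
    return basis

# Illustrative (assumed) data in R^3: one independent vector plus the
# canonical spanning set e_1, e_2, e_3.
independent = [np.array([1.0, 1.0, 0.0])]
spanning = [np.array([1.0, 0.0, 0.0]),
            np.array([0.0, 1.0, 0.0]),
            np.array([0.0, 0.0, 1.0])]

basis = extend_to_basis(independent, spanning)
print(len(basis))  # 3: the extended set is a basis of R^3
```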
The basis extension theorem implies that every finite-dimensional linear space has a basis. This is discussed in the lecture on the dimension of a linear space.
Another important fact, which will also be discussed in the lecture on the dimension of a linear space, is that all the bases of a space have the same number of elements.
Please cite as:
Taboga, Marco (2021). "Basis of a linear space", Lectures on matrix algebra. https://www.statlect.com/matrix-algebra/basis-of-a-linear-space.