Linear independence is a central concept in linear algebra. Two or more vectors are said to be linearly independent if none of them can be written as a linear combination of the others. Conversely, if at least one of them can be written as a linear combination of the others, then they are said to be linearly dependent.

In the remainder of this lecture we give a formal definition of linear independence, explain its meaning, and provide some examples.

Let us start with a formal definition of linear dependence.

Definition
Let $S$ be a linear space. Some vectors $x_1, x_2, \dots, x_n \in S$ are said to be **linearly dependent** if and only if there exist $n$ scalars $a_1, a_2, \dots, a_n$ such that
$$a_1 x_1 + a_2 x_2 + \dots + a_n x_n = 0$$
and at least one of the $n$ scalars is different from zero.

The requirement that at least one scalar be different from zero is fundamental.

First of all, without this requirement the definition would be trivial: we could always choose $a_1 = a_2 = \dots = a_n = 0$ and obtain as a result $a_1 x_1 + a_2 x_2 + \dots + a_n x_n = 0$ for any set of vectors.

Secondly, if one of the coefficients of the linear combination is different from zero (suppose, without loss of generality, it is $a_1$), then we can write
$$x_1 = -\frac{a_2}{a_1} x_2 - \dots - \frac{a_n}{a_1} x_n$$
that is, $x_1$ is a linear combination of the vectors $x_2, \dots, x_n$ with coefficients $-a_2/a_1, \dots, -a_n/a_1$. This fact motivates the informal definition of linear dependence we have given in the introduction above: two or more vectors are linearly dependent if at least one of them can be written as a linear combination of the others.

The assumption $a_1 \neq 0$ is without loss of generality because we can always change the order of the vectors and assign the first position to a vector corresponding to a non-zero coefficient (by assumption there exists at least one such vector).
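This rewriting is easy to illustrate numerically. A minimal NumPy sketch, with made-up values: if a dependent combination $a_1 x_1 + a_2 x_2 = 0$ has $a_1 \neq 0$, dividing through by $a_1$ recovers $x_1$ from the remaining vectors.

```python
import numpy as np

# Illustrative dependent combination: 2 * x1 + (-1) * x2 = 0,
# so a1 = 2 and a2 = -1, with a1 different from zero
x2 = np.array([2.0, 4.0])
a1, a2 = 2.0, -1.0

# Rearranging the combination: x1 = -(a2 / a1) * x2
x1 = -(a2 / a1) * x2
print(x1)  # x1 recovered as a linear combination of the other vector(s)
```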

Example
Let $x_1$ and $x_2$ be column vectors defined as follows:
$$x_1 = \begin{bmatrix} 1 \\ 2 \end{bmatrix}, \quad x_2 = \begin{bmatrix} 2 \\ 4 \end{bmatrix}$$
The linear combination with coefficients $a_1 = 2$ and $a_2 = -1$ gives as a result the zero vector because
$$2 x_1 - x_2 = \begin{bmatrix} 2 \\ 4 \end{bmatrix} - \begin{bmatrix} 2 \\ 4 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$
As a consequence, $x_1$ and $x_2$ are linearly dependent.
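This kind of check can be automated: stacking the vectors as the columns of a matrix, they are linearly dependent exactly when the rank of the matrix is smaller than the number of vectors. A sketch with NumPy, using the illustrative vectors $x_1 = (1, 2)$ and $x_2 = (2, 4)$:

```python
import numpy as np

# Two illustrative column vectors; x2 is a multiple of x1
x1 = np.array([1.0, 2.0])
x2 = np.array([2.0, 4.0])

# Stack as columns: the vectors are linearly dependent iff the rank
# of the matrix is smaller than the number of vectors
A = np.column_stack([x1, x2])
rank = np.linalg.matrix_rank(A)
print(rank)                 # 1, smaller than the 2 vectors
print(rank < A.shape[1])    # True => linearly dependent
```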

It is now straightforward to give a definition of linear independence.

Definition
Let $S$ be a linear space. Some vectors $x_1, x_2, \dots, x_n \in S$ are said to be **linearly independent** if and only if they are not linearly dependent.

It follows from this definition that, in the case of linear independence,
$$a_1 x_1 + a_2 x_2 + \dots + a_n x_n = 0$$
implies
$$a_1 = a_2 = \dots = a_n = 0$$

In other words, when the vectors are linearly independent, their only linear combination that gives the zero vector as a result has all coefficients equal to zero.

Example
Let $x_1$ and $x_2$ be column vectors defined as follows:
$$x_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad x_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$$
Consider a linear combination of these two vectors with coefficients $a_1$ and $a_2$:
$$a_1 x_1 + a_2 x_2$$
This is equal to
$$\begin{bmatrix} a_1 \\ a_2 \end{bmatrix}$$
Therefore, we have that
$$a_1 x_1 + a_2 x_2 = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$
if and only if
$$\begin{bmatrix} a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$
that is, if and only if $a_1 = a_2 = 0$. As a consequence, the two vectors are linearly independent.
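The same rank criterion used for dependence confirms independence: the rank of the matrix of columns must equal the number of vectors. A NumPy sketch with the illustrative standard-basis vectors of $\mathbb{R}^2$:

```python
import numpy as np

# Illustrative independent vectors: the standard basis of R^2
x1 = np.array([1.0, 0.0])
x2 = np.array([0.0, 1.0])

# Independent iff the only solution of A @ a = 0 is a = 0,
# which holds iff the rank of A equals its number of columns
A = np.column_stack([x1, x2])
is_independent = np.linalg.matrix_rank(A) == A.shape[1]
print(is_independent)  # True
```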

Below you can find some exercises with explained solutions.

Define the following vectors:
$$x_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \quad x_2 = \begin{bmatrix} 2 \\ 1 \end{bmatrix}$$
Are $x_1$ and $x_2$ linearly independent?

Solution

Consider a linear combination with coefficients $a_1$ and $a_2$:
$$a_1 x_1 + a_2 x_2 = \begin{bmatrix} a_1 + 2 a_2 \\ a_1 + a_2 \end{bmatrix}$$
Such a linear combination gives as a result the zero vector if and only if
$$\begin{bmatrix} a_1 + 2 a_2 \\ a_1 + a_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$
that is, if and only if the two coefficients $a_1$ and $a_2$ solve the system of linear equations
$$\begin{cases} a_1 + 2 a_2 = 0 \\ a_1 + a_2 = 0 \end{cases}$$
This system can be solved as follows. From the second equation, we obtain
$$a_1 = -a_2$$
which, substituted in the first equation, gives
$$-a_2 + 2 a_2 = a_2 = 0$$
Thus, $a_2 = 0$ and $a_1 = 0$. Therefore, the only linear combination of $x_1$ and $x_2$ giving the zero vector as a result has all coefficients equal to zero. This means that $x_1$ and $x_2$ are linearly independent.
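The homogeneous system can also be solved numerically. Since the matrix whose columns are the two vectors is square and invertible, the system has only the trivial solution. A NumPy sketch assuming the hypothetical vectors $x_1 = (1, 1)$ and $x_2 = (2, 1)$:

```python
import numpy as np

# Hypothetical vectors of the exercise; A @ a = 0 encodes a1*x1 + a2*x2 = 0
x1 = np.array([1.0, 1.0])
x2 = np.array([2.0, 1.0])
A = np.column_stack([x1, x2])

# A is square with nonzero determinant, so the unique solution is a = 0
a = np.linalg.solve(A, np.zeros(2))
print(a)  # both coefficients are zero => linearly independent
```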

Let $x_1$, $x_2$ and $x_3$ be vectors defined as follows:
$$x_1 = \begin{bmatrix} 1 \\ 2 \end{bmatrix}, \quad x_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \quad x_3 = \begin{bmatrix} 2 \\ 4 \end{bmatrix}$$
Why are these vectors linearly dependent?

Solution

Notice that the vector $x_3$ is a scalar multiple of $x_1$:
$$x_3 = 2 x_1$$
or
$$2 x_1 - x_3 = 0$$
As a consequence, a linear combination of $x_1$, $x_2$ and $x_3$, with coefficients $a_1 = 2$, $a_2 = 0$ and $a_3 = -1$, gives as a result
$$2 x_1 + 0 x_2 - x_3 = 0$$
Thus, there exists a linear combination of the three vectors such that the coefficients of the combination are not all equal to zero, but the result of the combination is equal to the zero vector. This means that the three vectors are linearly dependent.
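The nontrivial zero combination can be verified directly. A NumPy sketch assuming the illustrative vectors $x_1 = (1, 2)$, $x_2 = (0, 1)$ and $x_3 = 2 x_1 = (2, 4)$:

```python
import numpy as np

# Illustrative vectors where x3 = 2 * x1, so the three are dependent
x1 = np.array([1.0, 2.0])
x2 = np.array([0.0, 1.0])
x3 = np.array([2.0, 4.0])

# The combination 2*x1 + 0*x2 - 1*x3 has nonzero coefficients
# yet yields the zero vector
combo = 2 * x1 + 0 * x2 - 1 * x3
is_dependent = np.allclose(combo, 0.0)
print(is_dependent)  # True => linearly dependent
```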

Let $k$ be a real number. Define the following vectors:
$$x_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \quad x_2 = \begin{bmatrix} 1 \\ 1 + k \end{bmatrix}$$
Are $x_1$ and $x_2$ linearly independent?

Solution

Take a linear combination with coefficients $a_1$ and $a_2$:
$$a_1 x_1 + a_2 x_2 = \begin{bmatrix} a_1 + a_2 \\ a_1 + (1 + k) a_2 \end{bmatrix}$$
This linear combination is equal to the zero vector if and only if
$$\begin{bmatrix} a_1 + a_2 \\ a_1 + (1 + k) a_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$
that is, if and only if the two coefficients $a_1$ and $a_2$ solve the system of linear equations
$$\begin{cases} a_1 + a_2 = 0 \\ a_1 + (1 + k) a_2 = 0 \end{cases}$$
A solution to this system can be found as follows. We subtract the second equation from the first and obtain
$$a_2 - (1 + k) a_2 = 0$$
or
$$k a_2 = 0$$
Moreover, the first equation gives
$$a_1 = -a_2$$
Now, there are two possible cases. If $k \neq 0$ (first case), then $a_2 = 0$ and, as a consequence, $a_1 = 0$. Thus, in this case the only linear combination of $x_1$ and $x_2$ giving the zero vector as a result has all coefficients equal to zero. This means that $x_1$ and $x_2$ are linearly independent. If instead $k = 0$ (second case), then any value of $a_2$ will satisfy the equation
$$k a_2 = 0$$
Choose a number different from zero and denote it by $c$. Then, the system of linear equations will be solved by $a_2 = c$ and $a_1 = -c$. Thus, in this case there are infinitely many linear combinations with at least one coefficient different from zero that give the zero vector as a result (a different combination for each choice of $c$). This means that $x_1$ and $x_2$ are linearly dependent.
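The case analysis can be mirrored numerically by checking the rank for particular values of the parameter. A sketch assuming the illustrative vectors $x_1 = (1, 1)$ and $x_2 = (1, 1 + k)$:

```python
import numpy as np

def independent(k: float) -> bool:
    """Return True iff the illustrative vectors (1, 1) and (1, 1 + k)
    are linearly independent for this value of k."""
    A = np.column_stack([[1.0, 1.0], [1.0, 1.0 + k]])
    # Independent iff the rank equals the number of vectors (here, 2)
    return np.linalg.matrix_rank(A) == 2

print(independent(3.0))  # True: k != 0, the vectors are independent
print(independent(0.0))  # False: k == 0 makes x2 equal to x1
```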

Please cite as:

Taboga, Marco (2021). "Linear independence", Lectures on matrix algebra. https://www.statlect.com/matrix-algebra/linear-independence.
