In this lecture we show how matrices and vectors can be used to represent and analyze systems of linear equations.
A system of $m$ linear equations in $n$ unknowns is a set of equations of the form
$$\begin{aligned} a_{11}x_1 + a_{12}x_2 + \ldots + a_{1n}x_n &= b_1 \\ &\;\;\vdots \\ a_{m1}x_1 + a_{m2}x_2 + \ldots + a_{mn}x_n &= b_m \end{aligned}$$
where $x_1, \ldots, x_n$ are the unknowns, and $a_{ij}$ (for $i = 1, \ldots, m$ and $j = 1, \ldots, n$) and $b_i$ (for $i = 1, \ldots, m$) are known constants.
The unknowns are the values that we would like to find. Solving a system of linear equations means finding values of $x_1, \ldots, x_n$ such that all the equations are satisfied simultaneously. Such a set of values is called a solution of the system.
Example Define the system
$$\begin{cases} x_1 + x_2 = 3 \\ 2x_1 - x_2 = 0 \end{cases}$$
It is a system of 2 equations in 2 unknowns. A solution of the system is
$$x_1 = 1, \qquad x_2 = 2$$
which can be verified by substituting these two values into the system:
$$\begin{cases} 1 + 2 = 3 \\ 2 \cdot 1 - 2 = 0 \end{cases}$$
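The substitution check in the example can be automated. The sketch below verifies a candidate solution of a small system by substituting it into every equation; the coefficients, constants, and candidate solution are illustrative.

```python
# Verify a candidate solution of a linear system by substitution.
# Illustrative system (coefficients a[i][j], constants b[i]):
#   1*x1 + 1*x2 = 3
#   2*x1 - 1*x2 = 0
a = [[1, 1],
     [2, -1]]
b = [3, 0]

x = [1, 2]  # candidate solution (x1, x2)

# Substitute x into each equation and compare with the right-hand side.
satisfied = all(
    sum(a[i][j] * x[j] for j in range(len(x))) == b[i]
    for i in range(len(b))
)
print(satisfied)  # True: x satisfies every equation
```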
In general, a solution is not guaranteed to exist. If it exists, it is not guaranteed to be unique. Therefore, the theory of linear equations is concerned with three main aspects:
deriving conditions for the existence of solutions of a linear system;
understanding whether a solution is unique, and how multiple solutions are related to each other;
devising techniques for finding the solutions of a linear system.
The above system of $m$ linear equations in $n$ unknowns can be represented compactly by using matrices as follows:
$$Ax = b$$
where:
$x$ is the $n \times 1$ vector of unknowns $x_1, \ldots, x_n$;
$A$ is the $m \times n$ matrix of coefficients, whose $(i,j)$-th element is the constant $a_{ij}$ that multiplies $x_j$ in the $i$-th equation of the system;
$b$ is the $m \times 1$ vector of constants $b_1, \ldots, b_m$.
To understand how the representation works, notice that $Ax$ is an $m \times 1$ vector whose $i$-th element is equal to the inner product of the $i$-th row of $A$ and $x$, that is,
$$(Ax)_i = a_{i1}x_1 + a_{i2}x_2 + \ldots + a_{in}x_n$$
Therefore, the equation $Ax = b$ requires that
$$a_{i1}x_1 + a_{i2}x_2 + \ldots + a_{in}x_n = b_i \quad \text{for } i = 1, \ldots, m$$
which is exactly the original system of $m$ equations.
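The row-by-row inner-product interpretation can be sketched in a few lines of code; the matrix and vector below are illustrative.

```python
# Compute Ax as the vector of inner products between the rows of A and x.
def matvec(A, x):
    """Return Ax, whose i-th entry is the inner product of row i of A with x."""
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

# Illustrative 2x2 matrix and vector of "unknowns":
A = [[1, 2],
     [3, 4]]
x = [1, 2]

print(matvec(A, x))  # [5, 11]
```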
Example The system
$$\begin{cases} x_1 + 2x_2 = 5 \\ 3x_1 + 4x_2 = 11 \end{cases}$$
can be represented as
$$Ax = b$$
where the matrix of coefficients is
$$A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$$
the vector of unknowns is
$$x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$$
and the vector of constant terms is
$$b = \begin{bmatrix} 5 \\ 11 \end{bmatrix}$$
By writing a system of linear equations in matrix form, we can easily provide general conditions for the existence of a solution.
Proposition The linear system $Ax = b$ has a solution if and only if $b$ belongs to the span of the columns of $A$.
The product $Ax$ can be interpreted as a linear combination of the columns of $A$, with coefficients taken from $x$:
$$Ax = x_1 A_{\cdot 1} + x_2 A_{\cdot 2} + \ldots + x_n A_{\cdot n}$$
where $A_{\cdot j}$ denotes the $j$-th column of $A$. Therefore, the problem of solving the system is tantamount to finding a vector $x$ of coefficients that allows us to write $b$ as a linear combination of the columns of $A$. But $b$ can be written as a linear combination of the columns of $A$ if and only if it belongs to their span.
We now give a general condition for the uniqueness of the solution.
Proposition If the linear system $Ax = b$ has a solution, then the solution is unique if and only if the columns of $A$ are linearly independent.
Let's first prove the if part. We have proved above that there is a solution if and only if $b$ belongs to the span of the columns of $A$. If the columns of $A$ are linearly independent, then they form a basis for their span. Furthermore, the representation of any vector of the span as a linear combination of the basis is unique. Therefore, if the columns of $A$ are linearly independent, there is only one linear combination of them that gives $b$ as a result, that is, the solution of the system is unique. Let's now prove the only if part. We are going to prove that if the columns are not independent, then there is more than one solution. Let $x$ be a solution, that is,
$$Ax = b$$
When the columns of $A$ are linearly dependent, there exists a non-zero vector $y$ that satisfies
$$Ay = 0$$
As a consequence, there are infinitely many solutions, because $x + \alpha y$ is a solution of the system for any scalar $\alpha$:
$$A(x + \alpha y) = Ax + \alpha Ay = b + \alpha \cdot 0 = b$$
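The last step of the proof can be checked numerically. In the illustrative system below, the second column of $A$ is twice the first (so the columns are dependent), and adding any multiple of a non-zero vector $y$ with $Ay = 0$ to a particular solution $x$ yields another solution.

```python
# When the columns of A are linearly dependent, there is a non-zero y
# with Ay = 0, and x + alpha*y solves Ax = b for every scalar alpha.
def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

# Illustrative system with dependent columns (second = 2 * first):
A = [[1, 2],
     [2, 4]]
b = [3, 6]

x = [3, 0]    # particular solution: 3 * (first column of A) = b
y = [-2, 1]   # non-zero vector with A y = 0

for alpha in [0, 1, -5, 2.5]:
    x_alpha = [xi + alpha * yi for xi, yi in zip(x, y)]
    print(matvec(A, x_alpha) == b)  # True for every alpha
```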
The following proposition about multiple solutions holds.
Proposition If the linear system $Ax = b$ has a solution and the columns of $A$ are not linearly independent, then there are infinitely many solutions.
See the previous proof.
Below you can find some exercises with explained solutions.
Find the matrix representation of the system
$$\begin{cases} x_1 - x_2 + 2x_3 = 3 \\ 2x_1 + x_2 - x_3 = 0 \end{cases}$$
The system can be represented as
$$Ax = b$$
where the matrix of coefficients is
$$A = \begin{bmatrix} 1 & -1 & 2 \\ 2 & 1 & -1 \end{bmatrix}$$
the vector of unknowns is
$$x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$$
and the vector of constant terms is
$$b = \begin{bmatrix} 3 \\ 0 \end{bmatrix}$$
Define
$$A = \begin{bmatrix} 1 & 3 \\ 2 & -1 \end{bmatrix}, \qquad x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}, \qquad b = \begin{bmatrix} 5 \\ 0 \end{bmatrix}$$
Write down the equations of the system $Ax = b$.
The two equations of the system are
$$\begin{cases} x_1 + 3x_2 = 5 \\ 2x_1 - x_2 = 0 \end{cases}$$