Lecture 6
Recall
1. Solving a system of equations: Given a matrix $A$ and a vector $\mathbf{b}$, we wish to find the vector(s) $\mathbf{x}$ such that $A\mathbf{x} = \mathbf{b}$.
2. The augmented matrix $[\,A \mid \mathbf{b}\,]$.
3. Terms associated with the augmented matrix that help with finding the solutions to a system: rank, pivots, row echelon form, free variables.
4. The connection between the linear span of a set of vectors and solving a system of equations.
Linear Independence of a Set of Vectors
Linear independence comes from a geometric way of understanding parallel vectors. Given two vectors in the plane that are not parallel, any point on the plane can be reached by moving in the directions of the two vectors. In contrast, given two parallel vectors, many points cannot be reached through such movements. Only points along the line that the vectors lie in can be reached.
It is useful to keep this geometric interpretation of parallel vectors in mind, but we wish to generalize this concept and express it in algebraic terms.
Algebraic Representation of Parallelism
Here is a first attempt at defining parallelism in algebraic terms:
Algebraically, we say that two vectors $\mathbf{v}$ and $\mathbf{w}$ are parallel if $\mathbf{v} = c\,\mathbf{w}$ for some scalar $c$.
This is equivalent to saying that
\[
\mathbf{v} - c\,\mathbf{w} = \mathbf{0},
\]
or in other words, two vectors are parallel if a linear combination of $\mathbf{v}$ and $\mathbf{w}$ can produce the zero vector.
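For concreteness, here is a small illustrative example (the vectors are chosen here, not taken from the lecture). With
\[
\mathbf{v} = \begin{bmatrix} 1 \\ 2 \end{bmatrix}, \qquad \mathbf{w} = \begin{bmatrix} 2 \\ 4 \end{bmatrix},
\]
we have $\mathbf{w} = 2\mathbf{v}$, so $2\mathbf{v} - \mathbf{w} = \mathbf{0}$: a linear combination of $\mathbf{v}$ and $\mathbf{w}$ with nonzero coefficients produces the zero vector, and the two vectors are parallel.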
What about higher dimensions, say $\mathbb{R}^3$? If we want to extend this definition of parallelism, we may wish to express algebraically that two vectors lie in the same plane. If we consider this though, it is apparent that any two vectors lie in the same plane in $\mathbb{R}^3$. If instead we consider the case of three vectors in $\mathbb{R}^3$, this is no longer the case; for instance, the three vectors that correspond to the three coordinate axes do not lie in the same plane. It is of interest, then, to characterize exactly when three vectors will lie in the same plane.
Question:
When will three vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$ lie in the same plane in $\mathbb{R}^3$?
We can think of this in the following way: if the three vectors lie in the same plane, then a linear combination of $\mathbf{v}_1$ and $\mathbf{v}_2$ should be able to produce $\mathbf{v}_3$:
\[
\mathbf{v}_3 = c_1\mathbf{v}_1 + c_2\mathbf{v}_2.
\]
Again, we can rewrite this equation and find a familiar expression:
\[
c_1\mathbf{v}_1 + c_2\mathbf{v}_2 - \mathbf{v}_3 = \mathbf{0},
\]
that is, $\mathbf{v}_1$, $\mathbf{v}_2$, $\mathbf{v}_3$ can form a linear combination that produces the zero vector.
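For instance (with vectors chosen here purely for illustration), take three vectors in the $xy$-plane of $\mathbb{R}^3$:
\[
\mathbf{v}_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \quad \mathbf{v}_2 = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \quad \mathbf{v}_3 = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}, \qquad \mathbf{v}_1 + \mathbf{v}_2 - \mathbf{v}_3 = \mathbf{0}.
\]
The three vectors are coplanar, and correspondingly a linear combination of them (with coefficients $1$, $1$, $-1$) produces the zero vector.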
Coplanarity
The idea of coplanarity can thus be thought of as the capacity for a linear combination of the vectors to produce the zero vector. This motivates the following definition.
Definition. A set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ is linearly dependent if there is a non-trivial linear combination of the vectors that equals the zero vector. Otherwise, $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ is linearly independent.
There are some important points to understand with this definition. First, when we say a linear combination that is equal to zero, we mean
\[
c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_n\mathbf{v}_n = \mathbf{0}.
\]
One solution to this equation is obvious: letting $c_i = 0$ for all $i$. This is called the trivial solution.
A non-trivial linear combination resulting in the zero vector is one in which at least one of the coefficients $c_i$ is nonzero.
Example.
Consider a set of vectors $S = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ in the space $\mathbb{R}^m$.
Question:
Is $S$ (linearly) independent or dependent?
This question boils down to: can a non-trivial linear combination of the vectors in $S$ equal the zero vector $\mathbf{0}$?
If we look at this equation, it should remind us of the column picture. That is, we are looking for the solutions of the system
\[
A\mathbf{x} = \mathbf{0},
\]
where $A$ is the matrix whose columns are the vectors of $S$.
Thinking about it, it may be enlightening to consider what exactly we are looking for. We now have the tools to solve the system, but are we simply looking for a solution, or something more, or something less than a particular solution?
Observe that such a system will always be consistent. The reason for this is that the trivial solution will always be a solution. A system becomes inconsistent only when a pivot enters the constant column, and since the entries in the constant column are all zero, none of the row operations we know can ever produce a non-zero entry in that column. Thus this system will always be consistent.
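Schematically, for a generic $3 \times 3$ coefficient matrix (shown only to illustrate the point, not a specific example from the lecture), row reduction of the augmented matrix looks like
\[
\left[\begin{array}{ccc|c} * & * & * & 0 \\ * & * & * & 0 \\ * & * & * & 0 \end{array}\right] \;\longrightarrow\; \left[\begin{array}{ccc|c} * & * & * & 0 \\ 0 & * & * & 0 \\ 0 & 0 & * & 0 \end{array}\right],
\]
and the constant column stays zero at every step, so no pivot can ever land in it.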
The question of importance then is: does this system have a non-zero solution? This equates to asking the question of whether this system has more than one solution or not. If you recall, we have found that linear systems may have either no solution, one solution, or infinitely many solutions. Since this system has at least one solution, we want to know whether it in fact has infinitely many solutions. When attempting to answer this question, what should come to mind is the rank of a matrix, or the presence of free variables. If there are free variables, then the system will have infinitely many solutions.
Going back to the example at hand, we put the coefficient matrix, whose columns are the vectors of $S$, in row echelon form.
The row echelon form has a pivot in every column, so the rank equals the number of columns and there are no free variables. Thus the system has a unique solution, which is the zero, or trivial, solution. Because of this, there is no non-trivial linear combination that produces the zero vector, and the set $S$ is independent. Here we have used the tools we have developed to test whether a set of vectors is linearly independent.
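Since the specific vectors of this example are not reproduced above, here is a stand-in computation with vectors $\mathbf{u}_1, \mathbf{u}_2, \mathbf{u}_3$ chosen only to illustrate the same conclusion. Taking them as the columns of the coefficient matrix and row reducing,
\[
\begin{bmatrix} 1 & 1 & 0 \\ 1 & 2 & 1 \\ 0 & 1 & 2 \end{bmatrix} \xrightarrow{R_2 \to R_2 - R_1} \begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 0 & 1 & 2 \end{bmatrix} \xrightarrow{R_3 \to R_3 - R_2} \begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix},
\]
which has three pivots, rank $3$, and no free variables; the only solution of the homogeneous system is the trivial one, so $\{\mathbf{u}_1, \mathbf{u}_2, \mathbf{u}_3\}$ is independent.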
An important thing to note is that we are looking at a specific type of linear system. In the case of determining linear independence, the right-hand-side column vector is the zero vector:
\[
A\mathbf{x} = \mathbf{0}.
\]
This type of system is referred to as a homogeneous system. Such a system is associated with linear independence, whereas a system of the form $A\mathbf{x} = \mathbf{b}$, with a general right-hand side $\mathbf{b}$, is associated with the linear span.
Example.
Let us consider another example. In the space $\mathbb{R}^2$, consider a set $S = \{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ of three vectors.
To check for linear dependence, we again look for solutions of the equation
\[
c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + c_3\mathbf{v}_3 = \mathbf{0}.
\]
We proceed by setting up the augmented matrix. One interesting thing to note is that, at least for the question of linear dependence, the systems we set up will always have a zero right-hand-side vector, so we may as well drop this column. We eventually will do this, but for now let us proceed normally.
In row echelon form, we see that the matrix has only two pivots. The third column then represents a free variable, and so there are infinitely many solutions. Thus $S$ is a linearly dependent set.
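Since the specific vectors of $S$ are not reproduced above, the following stand-in set in $\mathbb{R}^2$ (vectors chosen here only for illustration) shows the same phenomenon. With
\[
\mathbf{u}_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad \mathbf{u}_2 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \quad \mathbf{u}_3 = \begin{bmatrix} 3 \\ 2 \end{bmatrix},
\]
the coefficient matrix
\[
\begin{bmatrix} 1 & 1 & 3 \\ 0 & 1 & 2 \end{bmatrix}
\]
is already in row echelon form with pivots only in the first two columns; the third column corresponds to a free variable, so the homogeneous system has infinitely many solutions and $\{\mathbf{u}_1, \mathbf{u}_2, \mathbf{u}_3\}$ is dependent.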
Since $S$ is a linearly dependent set, there must exist a non-trivial linear combination that produces the zero vector. An important question to ask now is:
Question:
What is the dependency relation? That is, find scalars $c_1, c_2, c_3$, not all zero, such that $c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + c_3\mathbf{v}_3 = \mathbf{0}$.
Solving the reduced system, for instance by assigning a convenient value to the free variable and back-substituting, gives a particular non-trivial solution $(c_1, c_2, c_3)$. This tells us explicitly which linear combination of $\mathbf{v}_1$, $\mathbf{v}_2$, and $\mathbf{v}_3$ produces the zero vector; that is the dependency relation.
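Continuing with the stand-in vectors $\mathbf{u}_1, \mathbf{u}_2, \mathbf{u}_3$ above, setting the free variable $c_3 = 1$ and back-substituting gives
\[
c_2 + 2c_3 = 0 \;\Rightarrow\; c_2 = -2, \qquad c_1 + c_2 + 3c_3 = 0 \;\Rightarrow\; c_1 = -1,
\]
so the dependency relation is
\[
-\mathbf{u}_1 - 2\mathbf{u}_2 + \mathbf{u}_3 = \mathbf{0}, \qquad \text{equivalently} \qquad \mathbf{u}_3 = \mathbf{u}_1 + 2\mathbf{u}_2.
\]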
Question:
What can we say about the linear dependence of a set in $\mathbb{R}^2$ in general? What if the set contains one, two, three, four, or more vectors? Consider $S = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ in $\mathbb{R}^2$ with $n > 2$.
Note that there are $n$ columns in the coefficient matrix, and that the rank is at most $2 < n$ since there are only two rows. Thus there must be at least one free variable, so there will be infinitely many solutions, and the set is linearly dependent. Indeed, through row operations the matrix will eventually be reduced to a form such as
\[
\begin{bmatrix} * & * & * & \cdots & * \\ 0 & * & * & \cdots & * \end{bmatrix},
\]
which can have at most two pivots.
It seems then that given more than two vectors in $\mathbb{R}^2$, they will always be dependent. Given two or fewer vectors, they may remain independent.
Theorem 1.
Let $S = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ be a set of vectors in $\mathbb{R}^m$. If $n > m$, then $S$ will be a dependent set of vectors.
Proof.
(Sketch)
The number of columns of the coefficient matrix is $n$, and the number of rows is $m$. Given $n > m$, the rank is at most $m$, so there are at least $n - m \ge 1$ free variables. The system $c_1\mathbf{v}_1 + \cdots + c_n\mathbf{v}_n = \mathbf{0}$ will then have infinitely many solutions, and thus $S$ is dependent.
□
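As a quick sanity check of the counting argument in the proof sketch (the numbers here are chosen only for illustration), take $n = 4$ vectors in $\mathbb{R}^3$, so $m = 3$. The coefficient matrix is $3 \times 4$, and
\[
\operatorname{rank} \le m = 3 < 4 = n \quad \Longrightarrow \quad \text{at least } n - m = 1 \text{ free variable},
\]
so the homogeneous system has infinitely many solutions and the four vectors must be dependent.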