Lecture 6

A. Agarwal
February 8, 2012

Recall

  1. Solving a system of equations: $\mathbf{A}\vec{x} = \vec{b}$. Given a matrix $\mathbf{A}$ and a vector $\vec{b}$, we wish to find the vector $\vec{x}$.

  2. The augmented matrix

     $[\mathbf{A} \mid \vec{b}]$

  3. Terms associated with the augmented matrix that help with finding the solutions to a system: rank, pivots, row echelon form, free variables.

  4. The connection between the linear span of a set of vectors and solving a system of equations.

Linear Independence of a Set of Vectors

Linear independence comes from a geometric way of understanding parallel vectors. Given two vectors in the plane that are not parallel, any point in the plane can be reached by moving in the directions of the two vectors. In contrast, given two parallel vectors, many points cannot be reached through such movements: only points on the line along which the vectors lie can be reached.

Figure 1: Non-parallel vectors. The point P at (3,2) can be reached from any point through movements along the two vectors $\vec{v}_1$ and $\vec{v}_2$.

Figure 2: Parallel vectors. The point Q at (3,2) cannot be reached from the point S at (0,0) through movements along the two vectors $\vec{v}_1$ and $\vec{v}_2$.

It is useful to keep this geometric interpretation of parallel vectors in mind, but we wish to generalize this concept and express it in algebraic terms.

Algebraic Representation of Parallelism

Here is a first attempt at defining parallelism in algebraic terms:

Algebraically, we say that $\vec{v}_1$ and $\vec{v}_2$ are parallel if $\vec{v}_1 = c\,\vec{v}_2$ for some scalar $c$.

This is equivalent to saying that

$$\vec{v}_1 = c\,\vec{v}_2$$
$$(1)\,\vec{v}_1 + (-c)\,\vec{v}_2 = \vec{0}$$

or in other words, two vectors are parallel if a linear combination of $\vec{v}_1$ and $\vec{v}_2$ with a nonzero coefficient can produce the zero vector.
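As a computational aside (not part of the lecture), this characterization is easy to test numerically: two vectors are parallel exactly when the matrix having them as columns has rank at most 1. A minimal sketch, assuming Python with numpy; the helper name are_parallel is ours, chosen for illustration:

```python
# Parallelism test: v1 and v2 are parallel exactly when the matrix
# with v1 and v2 as its columns has rank <= 1.
import numpy as np

def are_parallel(v1, v2):
    A = np.column_stack([v1, v2])  # the vectors become the columns of A
    return np.linalg.matrix_rank(A) <= 1

print(are_parallel([1, 2], [2, 4]))   # True: (2,4) = 2*(1,2)
print(are_parallel([1, 2], [-3, 4]))  # False: not scalar multiples
```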

What about higher dimensions, say $\mathbb{R}^3$? If we want to extend this definition of parallelism, we may wish to express algebraically that $\vec{v}_1, \vec{v}_2$ lie in the same plane. On reflection, though, it is apparent that any two vectors in $\mathbb{R}^3$ lie in a common plane. If instead we consider three vectors in $\mathbb{R}^3$, this is no longer the case; for instance, the three vectors that correspond to the $x$, $y$, $z$ axes do not lie in a common plane. It is of interest, then, to characterize exactly when three vectors lie in the same plane.

Question:

When will $\vec{v}_1, \vec{v}_2, \vec{v}_3$ lie in the same plane in $\mathbb{R}^3$?

We can think of this in the following way: if the three vectors lie in the same plane, then a linear combination of $\vec{v}_1$ and $\vec{v}_2$ should be able to produce $\vec{v}_3$:

$$\vec{v}_3 = c_1\vec{v}_1 + c_2\vec{v}_2$$

Again, we can rewrite this equation and find a familiar expression:

$$\vec{v}_3 + (-c_1)\,\vec{v}_1 + (-c_2)\,\vec{v}_2 = \vec{0}$$

that is, a linear combination of $\vec{v}_1, \vec{v}_2, \vec{v}_3$ can produce the zero vector.
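For a concrete check (numbers chosen for illustration, not from the lecture), take $\vec{v}_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}$, $\vec{v}_2 = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}$, $\vec{v}_3 = \begin{bmatrix} 2 \\ 3 \\ 0 \end{bmatrix}$. All three lie in the $xy$-plane, and indeed $\vec{v}_3 + (-2)\,\vec{v}_1 + (-3)\,\vec{v}_2 = \vec{0}$.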

Coplanarity

The idea of coplanarity can be thought of as the capacity for a linear combination of vectors to produce the zero vector.

Let $S$ be a set of vectors,
$$S = \{\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k\}$$
$S$ is a linearly dependent set if a non-trivial linear combination of its vectors can produce the zero vector.

Otherwise, $S$ is linearly independent.

There are some important points to understand about this definition. First, when we say a linear combination that is equal to zero, we mean

$$c_1\vec{v}_1 + c_2\vec{v}_2 + \cdots + c_k\vec{v}_k = \vec{0}$$

One solution to this equation is obvious: letting $c_i = 0$ for all $i = 1, 2, \ldots, k$. This is called the trivial solution.

A non-trivial linear combination resulting in the zero vector is one in which at least one of the coefficients $c_i$ is nonzero.
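For example (an illustration, not from the lecture): with $\vec{v}_1 = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$ and $\vec{v}_2 = \begin{bmatrix} 2 \\ 4 \end{bmatrix}$, the combination $2\vec{v}_1 + (-1)\,\vec{v}_2 = \vec{0}$ is non-trivial, since the coefficients $2$ and $-1$ are nonzero; this set is therefore dependent.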

Example.

Consider the set of vectors $S$ in the space $\mathbb{R}^2$:

$$S = \left\{ \begin{bmatrix} 1 \\ 2 \end{bmatrix}, \begin{bmatrix} -3 \\ 4 \end{bmatrix} \right\}$$

Question:

Is $S$ (linearly) independent or dependent?

$$c_1 \begin{bmatrix} 1 \\ 2 \end{bmatrix} + c_2 \begin{bmatrix} -3 \\ 4 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$

This question boils down to: can a non-trivial linear combination equal $\vec{0}$?

If we look at this equation, it should remind us of the column picture. That is, we are looking for the solutions of the system

$$\left[\begin{array}{rr|r} 1 & -3 & 0 \\ 2 & 4 & 0 \end{array}\right]$$

Before computing, it may be enlightening to consider what exactly we are looking for. We now have the tools to solve the system, but are we simply looking for a solution, or for something more, or something less than a particular solution?

Observe that such a system will always be consistent. The reason for this is that the trivial solution c1=c2=0 will always be a solution. A system becomes inconsistent only when a pivot enters the constant column, and since the entries in the constant column are all zero, none of the row operations we know can ever produce a non-zero entry in that column. Thus this system will always be consistent.

The question of importance, then, is: does this system have a non-zero solution? This amounts to asking whether the system has more than one solution. If you recall, we have found that linear systems may have no solution, exactly one solution, or infinitely many solutions. Since this system has at least one solution, we want to know whether it in fact has infinitely many. When attempting to answer this question, what should come to mind is the rank of the matrix, or the presence of free variables. If there are free variables, then the system has infinitely many solutions.

Going back to the example at hand, we put the matrix in row echelon form:

$$\left[\begin{array}{rr|r} 1 & -3 & 0 \\ 2 & 4 & 0 \end{array}\right] \xrightarrow{-2R_1 + R_2} \left[\begin{array}{rr|r} 1 & -3 & 0 \\ 0 & 10 & 0 \end{array}\right]$$

This matrix has rank 2, so there are no free variables. The system therefore has a unique solution, namely the zero (trivial) solution, and no non-trivial linear combination produces the zero vector. Thus the set of vectors is independent. Here we have used the tools we developed earlier to test the linear dependence of a set of vectors.
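We can confirm this conclusion numerically. Below is a minimal sketch (not part of the lecture), assuming Python with numpy: independence of the columns is equivalent to the rank of the coefficient matrix equaling the number of columns.

```python
# Independence test for the example above: the homogeneous system has
# only the trivial solution exactly when rank(A) = number of columns.
import numpy as np

A = np.array([[1, -3],
              [2,  4]])           # columns are the vectors of S
rank = np.linalg.matrix_rank(A)
print(rank)                       # 2
print(rank == A.shape[1])         # True -> no free variables -> independent
```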

An important thing to note is that we are looking at a specific type of linear system. The right-hand side column vector in the case of determining linear independence is the zero vector:

$$c_1 \begin{bmatrix} 1 \\ 2 \end{bmatrix} + c_2 \begin{bmatrix} -3 \\ 4 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \Leftarrow \text{right-hand side is the zero vector}$$

This type of system is referred to as a homogeneous system:

$$\mathbf{A}\vec{x} = \vec{0}$$

Such a system is associated with linear independence, whereas a system such as $\mathbf{A}\vec{x} = \vec{b}$ is associated with linear span.

Example.

Let us consider another example. In the space $\mathbb{R}^3$, consider $S = \{\vec{v}_1, \vec{v}_2, \vec{v}_3\}$, where

$$\vec{v}_1 = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} \qquad \vec{v}_2 = \begin{bmatrix} 1 \\ 3 \\ 2 \end{bmatrix} \qquad \vec{v}_3 = \begin{bmatrix} 3 \\ 3 \\ 4 \end{bmatrix}$$

To check for linear dependence, we again look for solutions of the equation

$$c_1\vec{v}_1 + c_2\vec{v}_2 + c_3\vec{v}_3 = \vec{0}$$

We proceed by setting up the augmented matrix. One interesting thing to note is that, at least for the question of linear dependence, the systems we set up will always have a zero right-hand-side vector, so we may as well drop this column. We eventually will do this, but for now let us proceed as usual:

$$\left[\begin{array}{ccc|c} 1 & 1 & 3 & 0 \\ 0 & 3 & 3 & 0 \\ 1 & 2 & 4 & 0 \end{array}\right] \xrightarrow{-R_1 + R_3} \left[\begin{array}{ccc|c} 1 & 1 & 3 & 0 \\ 0 & 3 & 3 & 0 \\ 0 & 1 & 1 & 0 \end{array}\right] \xrightarrow{\frac{1}{3}R_2} \left[\begin{array}{ccc|c} 1 & 1 & 3 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 1 & 1 & 0 \end{array}\right] \xrightarrow{-R_2 + R_3} \left[\begin{array}{ccc|c} 1 & 1 & 3 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right]$$

In row echelon form, we see that the matrix has only two pivots. The third column therefore corresponds to a free variable, and so there are infinitely many solutions. Thus $S$ is a linearly dependent set.

Since $S$ is a linearly dependent set, there must exist a non-trivial linear combination that produces the zero vector. An important question to ask now is

Question:

What is the dependency relation? Find $c_1, c_2, c_3$.

$$\left.\begin{aligned} c_1 + c_2 + 3c_3 &= 0 \\ c_2 + c_3 &= 0 \end{aligned}\right\} \quad c_3 \text{ is a free variable}$$

This tells us that

$$\begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} = \begin{bmatrix} -2c_3 \\ -c_3 \\ c_3 \end{bmatrix} = c_3 \begin{bmatrix} -2 \\ -1 \\ 1 \end{bmatrix}$$

meaning that (taking $c_3 = 1$) we obtain the linear combination of $\vec{v}_1, \vec{v}_2, \vec{v}_3$

$$-2\vec{v}_1 - \vec{v}_2 + \vec{v}_3 = \vec{0}$$
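The dependency relation can also be recovered mechanically. Here is a minimal sketch (not part of the lecture), assuming Python with sympy, whose nullspace method returns a basis for the solutions of the homogeneous system:

```python
# Recover the dependency relation: the nullspace of A contains every
# vector (c1, c2, c3) with c1*v1 + c2*v2 + c3*v3 = 0.
from sympy import Matrix

A = Matrix([[1, 1, 3],
            [0, 3, 3],
            [1, 2, 4]])           # columns are v1, v2, v3
print(A.nullspace())              # one basis vector: (-2, -1, 1)
```

The single basis vector $(-2, -1, 1)$ matches the coefficients found above.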

Question:

What can we say about linear dependence of a set in $\mathbb{R}^2$ in general? What if the set contains one, two, three, four, or more vectors? Consider $S = \{\vec{v}_1, \vec{v}_2, \vec{v}_3\}$ with

$$\vec{v}_1 = \begin{bmatrix} a \\ b \end{bmatrix} \qquad \vec{v}_2 = \begin{bmatrix} p \\ q \end{bmatrix} \qquad \vec{v}_3 = \begin{bmatrix} s \\ t \end{bmatrix}$$
$$\left[\begin{array}{ccc|c} a & p & s & 0 \\ b & q & t & 0 \end{array}\right]$$

Note that there are 3 columns in the coefficient matrix, and that $\text{Rank}(A) \le 2$ since there are only two rows. There must therefore be at least one free variable, so there will be infinitely many solutions, and the set is linearly dependent. Through row operations, the matrix will eventually be reduced to a form such as

$$\left[\begin{array}{ccc|c} a & p & s & 0 \\ b & q & t & 0 \end{array}\right] \longrightarrow \left[\begin{array}{ccc|c} \blacksquare & * & * & 0 \\ 0 & \blacksquare & * & 0 \end{array}\right] \text{ or } \left[\begin{array}{ccc|c} \blacksquare & * & * & 0 \\ 0 & 0 & \blacksquare & 0 \end{array}\right]$$
where $\blacksquare$ denotes a pivot and $*$ an arbitrary entry.

It seems, then, that more than two vectors in $\mathbb{R}^2$ will always be dependent. A set of two or fewer vectors may be independent.

Theorem 1.

Let $S = \{\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k\}$ be a set of vectors in $\mathbb{R}^n$. If $k > n$, then $S$ is a dependent set of vectors.

Proof.

(Sketch)

$$\left[\begin{array}{cccc|c} \uparrow & \uparrow & & \uparrow & 0 \\ \vec{v}_1 & \vec{v}_2 & \cdots & \vec{v}_k & 0 \\ \downarrow & \downarrow & & \downarrow & 0 \end{array}\right]$$

The number of columns of the coefficient matrix is $k$, and the number of rows is $n$. Given $n < k$ and $\text{Rank}(A) \le n$, there are at least $k - n$ free variables, with $k - n > 0$. The system therefore has infinitely many solutions, and thus $S$ is dependent.

∎
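To see the theorem in action computationally, here is a minimal sketch (not part of the lecture), assuming Python with numpy; the helper name is_dependent is chosen here for illustration:

```python
# Theorem 1 in practice: k vectors in R^n with k > n are always dependent,
# since rank(A) <= n < k guarantees a free variable.
import numpy as np

def is_dependent(vectors):
    # Columns of A are the vectors; dependence <=> rank(A) < number of vectors.
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) < A.shape[1]

# Any three vectors in R^2 (k = 3 > n = 2) come out dependent:
print(is_dependent([[1, 2], [-3, 4], [5, 7]]))   # True
```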