Lecture 7
Recall
1. A set of vectors $\{v_1, v_2, \ldots, v_k\}$ in $\mathbb{R}^n$ is linearly independent if the following homogeneous equation has only the zero (trivial) solution:

   $$c_1 v_1 + c_2 v_2 + \cdots + c_k v_k = 0 \qquad (1)$$

   Note that (1) always has a solution, namely $c_1 = c_2 = \cdots = c_k = 0$. Our interest is whether this is the only solution, or if there are others.

   It is a common mistake, when first learning this, to think something like "since we can form a linear combination with $c_1 = c_2 = \cdots = c_k = 0$, the set is dependent." This is not the case; dependence requires a solution with at least one coefficient non-zero.
2. If the number of vectors in $S$ is greater than the dimension of the space, then $S$ is dependent.

   Suppose $S = \{v_1, v_2, \ldots, v_k\}$ is a dependent set. Then (1) has a solution with at least one $c_i \neq 0$. Without loss of generality, say $c_1 \neq 0$. Then

   $$v_1 = -\frac{c_2}{c_1} v_2 - \cdots - \frac{c_k}{c_1} v_k$$

   The right-hand side of this equation should remind us of the span; in this case, of $\{v_2, \ldots, v_k\}$. That is, $v_1$ lies in the span of the other $v_i$'s (see the numerical sketch below).
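Both recalled facts are easy to check by machine. Here is a minimal NumPy sketch, with vectors made up purely for illustration: a set is independent exactly when the matrix having the vectors as its columns has rank equal to the number of vectors.

```python
import numpy as np

# Illustrative vectors: v3 is deliberately built from v1 and v2,
# so the set {v1, v2, v3} is dependent.
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = 1 * v1 + 2 * v2

# Rank test: rank less than the number of vectors means the
# homogeneous equation (1) has a non-trivial solution.
A = np.column_stack([v1, v2, v3])
print(np.linalg.matrix_rank(A))   # 2 < 3, so the set is dependent

# As in the argument above, v3 lies in the span of the others:
# v3 = 1*v1 + 2*v2.
```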
Linear Independence
Question:
Consider, for example, the set of vectors $S = \left\{ \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \begin{bmatrix} 1 \\ 1 \end{bmatrix} \right\}$ in $\mathbb{R}^2$. Is $S$ independent or dependent? To answer this, we must ask whether the following homogeneous equation has a non-zero solution:

$$c_1 \begin{bmatrix} 1 \\ 0 \end{bmatrix} + c_2 \begin{bmatrix} 0 \\ 1 \end{bmatrix} + c_3 \begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$

Consider for instance the solution $c_1 = 1$, $c_2 = 1$, $c_3 = -1$. This is a non-zero, or non-trivial, solution, hence $S$ must be dependent.
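For readers who want to verify the arithmetic, here is a one-line NumPy check using the example vectors above:

```python
import numpy as np

# Plug the claimed non-trivial solution into the homogeneous equation.
v1, v2, v3 = np.array([1, 0]), np.array([0, 1]), np.array([1, 1])
c1, c2, c3 = 1, 1, -1
print(c1 * v1 + c2 * v2 + c3 * v3)   # [0 0], so S is dependent
```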
Question:
Let $S = \{u_1, u_2, u_3\}$ be a set of vectors in $\mathbb{R}^3$. Suppose none of the vectors in $S$ are zero vectors, and each is perpendicular to the others. That is,

$$u_i \cdot u_j = 0 \quad \text{for } i \neq j$$

Is $S$ independent or dependent? Before we answer, it is worth observing how the question is posed: if we can find a useful relationship between orthogonality and linear dependence, we may uncover an illuminating geometric connection.
To answer the question at hand, we can start with the equation we already know: the linear combination of the vectors that equals the zero vector:

$$c_1 u_1 + c_2 u_2 + c_3 u_3 = 0$$

It is initially unclear how the orthogonality will come into play, but consider taking the dot product of both sides of the equation with $u_1$:

$$\begin{aligned}
(c_1 u_1 + c_2 u_2 + c_3 u_3) \cdot u_1 &= 0 \cdot u_1 && \\
c_1 (u_1 \cdot u_1) + c_2 (u_2 \cdot u_1) + c_3 (u_3 \cdot u_1) &= 0 && \text{distributive property of the dot product} \\
c_1 (u_1 \cdot u_1) &= 0 && \text{the vectors are orthogonal to each other}
\end{aligned}$$

Since $u_1 \neq 0$, we know that $u_1 \cdot u_1 = \|u_1\|^2 \neq 0$. Thus from the above, $c_1 = 0$.
Likewise, we find that $c_2 = 0$ and $c_3 = 0$ by taking the dot product with $u_2$ and $u_3$, respectively.

Therefore, the only solution is the zero solution. Thus $S$ must be independent.

Note: In an $n$-dimensional space we cannot have more than $n$ (non-zero) mutually orthogonal vectors.
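As a numerical companion to this argument, here is a NumPy sketch with an orthogonal set chosen for illustration; the rank test confirms what the dot-product argument proves.

```python
import numpy as np

# An orthogonal set in R^3, chosen for this sketch:
# pairwise perpendicular, none of them zero.
u1 = np.array([1.0, 1.0, 0.0])
u2 = np.array([1.0, -1.0, 0.0])
u3 = np.array([0.0, 0.0, 2.0])

print(u1 @ u2, u1 @ u3, u2 @ u3)   # 0.0 0.0 0.0, pairwise orthogonal

# Full rank confirms independence, as the dot-product argument predicts.
U = np.column_stack([u1, u2, u3])
print(np.linalg.matrix_rank(U))    # 3
```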
Matrices
We have already seen some examples of matrices as a means of storing systems of linear equations. They are simply arrays of numbers laid out in a way in which each entry has a convenient address.

The entry $a_{ij}$ is the entry found in row $i$ and column $j$. We always give the address of an entry row first and column second.

Every matrix is described by its entries, and we sometimes write it like below:

$$A = [a_{ij}] = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}$$

For our purposes, the entries will almost always be from $\mathbb{R}$ or $\mathbb{C}$, but keep in mind that the entries could be any object.

It is also important to note the size of a matrix, and we often do this by saying that $A$ is an $m \times n$ matrix, or writing $A_{m \times n}$ (a small indexing sketch follows).
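In code, the same row-first, column-second convention appears, with the caveat that NumPy counts from zero; the matrix below is made up for illustration.

```python
import numpy as np

# Row-first, column-second addressing. NumPy indexes from 0,
# so the entry a_{ij} of the notes is A[i-1, j-1] here.
A = np.array([[1, 2, 3],
              [4, 5, 6]])
print(A.shape)    # (2, 3): A is a 2 x 3 matrix
print(A[0, 2])    # 3, the entry a_{13} (row 1, column 3)
```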
Algebra of Matrices
Suppose $A$ and $B$ are matrices of the same size, say $m \times n$, and let $c$ be a scalar. We define several operations on matrices as follows:

| Operation | Definition by Entry | Resultant Size |
| --- | --- | --- |
| Matrix addition | $(A + B)_{ij} = a_{ij} + b_{ij}$ | $m \times n$ |
| Matrix subtraction | $(A - B)_{ij} = a_{ij} - b_{ij}$ | $m \times n$ |
| Scalar multiplication | $(cA)_{ij} = c\,a_{ij}$ | $m \times n$ |
Example.
Let $A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$ and $B = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$. Then $A + B$ is given by

$$A + B = \begin{bmatrix} 1 + 0 & 2 + 1 \\ 3 + 1 & 4 + 0 \end{bmatrix} = \begin{bmatrix} 1 & 3 \\ 4 & 4 \end{bmatrix}$$
Some may wonder what the result of an addition such as $A_{2 \times 2} + C_{2 \times 3}$ would be. Since the dimensions of these matrices do not match, such an addition is not defined. It is common to wonder whether we could just append a column of zeros to the smaller matrix in order to make the operation defined. Of course we could do that, but it would completely change that matrix, so we do not allow this type of operation.
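The following NumPy sketch (with $A$ and $B$ as in the example above, and a $2 \times 3$ matrix $C$ made up for illustration) runs through the three entrywise operations and shows that a mismatched addition is rejected:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

print(A + B)    # entrywise sum, same size as A and B
print(A - B)    # entrywise difference
print(3 * A)    # scalar multiple

C = np.array([[1, 2, 3],
              [4, 5, 6]])
try:
    A + C       # 2 x 2 plus 2 x 3: sizes differ, so the sum is not defined
except ValueError as e:
    print("undefined:", e)
```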
These are the simplest types of operations we can define on matrices. Two elementary operations remain: multiplication and division. We might also wonder about operations such as logarithm and exponentiation. We will get to those types of operations later.
Matrix Multiplication
Matrix multiplication at first seems very strange. If we consider trying to define a multiplication on two matrices, we might conclude from seeing addition and subtraction that to multiply matrices they should be the same size. In fact this is not the case.
Before we discuss the full definition of matrix multiplication, consider the dot product:

$$u \cdot v = u_1 v_1 + u_2 v_2 + \cdots + u_n v_n$$

If the vectors had an unequal number of components, then the dot product would not be defined. If we consider the scalar $u \cdot v$ to be the sole entry in a matrix of size $1 \times 1$, then this tells us one possible set of sizes for which matrix multiplication is defined: a $1 \times n$ row times an $n \times 1$ column yields a $1 \times 1$ result.

The dimensions of matrices that can be multiplied can be summarized as

$$(m \times n) \cdot (n \times p) \longrightarrow (m \times p)$$

that is, the inner dimensions must match, and the outer dimensions give the size of the product.
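A quick NumPy sketch of both observations, with shapes chosen arbitrarily for illustration:

```python
import numpy as np

# The dot product as a (1 x n)(n x 1) matrix product with a 1 x 1 result.
row = np.array([[1, 2, 3]])        # 1 x 3
col = np.array([[4], [5], [6]])    # 3 x 1
print(row @ col)                   # [[32]]

# More generally, (m x n)(n x p) -> (m x p): inner dimensions must match.
A = np.ones((2, 3))
B = np.ones((3, 4))
print((A @ B).shape)               # (2, 4)
```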
Example.
Consider, for instance, the product

$$AB = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} 5 \\ 6 \end{bmatrix}$$

We can evaluate this by considering one row of $A$ at a time:

$$AB = \begin{bmatrix} 1 \cdot 5 + 2 \cdot 6 \\ 3 \cdot 5 + 4 \cdot 6 \end{bmatrix} = \begin{bmatrix} 17 \\ 39 \end{bmatrix}$$

It may be startling to observe the outcome of trying to multiply $B$ by $A$ in the opposite order:

$$BA = \begin{bmatrix} 5 \\ 6 \end{bmatrix} \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$$

Because the dimensions of the two matrices do not correctly line up in this order ($2 \times 1$ times $2 \times 2$), they cannot be multiplied in this way. The technical term for this is that, in general, matrix multiplication is non-commutative. That is, $AB \neq BA$. In fact, both products need not even be defined, depending on the sizes.
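Mirroring the example above in NumPy, the product works in one order and is rejected outright in the other:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5],
              [6]])
print(A @ B)    # defined: (2 x 2)(2 x 1) gives the 2 x 1 result above

try:
    B @ A       # (2 x 1)(2 x 2): inner dimensions 1 and 2 do not match
except ValueError as e:
    print("undefined:", e)
```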
Another Perspective on Matrix Multiplication
Consider again the matrix multiplication from the previous example. We can view this multiplication in a number of ways:
The three ways of viewing matrix multiplication are as follows (a short numerical check appears after the list):

1. The $(i, j)$ entry of $AB$ is the dot product of row $i$ of $A$ with column $j$ of $B$.

2. Row $i$ of $AB$ is a linear combination of the rows of $B$, with coefficients taken from row $i$ of $A$.

3. Column $j$ of $AB$ is a linear combination of the columns of $A$, with coefficients taken from column $j$ of $B$.
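Here is a minimal NumPy check of views 2 and 3, on a pair of matrices chosen for illustration:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])
AB = A @ B

# View 2: row 1 of AB is a combination of the rows of B,
# with coefficients 1 and 2 taken from row 1 of A.
print(np.allclose(AB[0], 1 * B[0] + 2 * B[1]))            # True

# View 3: column 1 of AB is a combination of the columns of A,
# with coefficients 5 and 7 taken from column 1 of B.
print(np.allclose(AB[:, 0], 5 * A[:, 0] + 7 * A[:, 1]))   # True
```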
Example.
We will try a multiplication with each of the three ways of viewing matrix multiplication. Let

$$A = \begin{bmatrix} 1 & 0 & 2 \\ 0 & 1 & 1 \end{bmatrix}, \qquad B = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$$

Note that $AB$ is not defined (the inner dimensions are $3$ and $2$), but $BA$ is defined. Let $C = BA$, the $2 \times 3$ matrix whose entries we will try to find.

1. First method: each entry of $C$ is the dot product of a row of $B$ with a column of $A$. For instance,

   $$c_{11} = \begin{bmatrix} 1 & 2 \end{bmatrix} \cdot \begin{bmatrix} 1 \\ 0 \end{bmatrix} = 1$$

   Similarly, computing the remaining entries,

   $$C = \begin{bmatrix} 1 & 2 & 4 \\ 3 & 4 & 10 \end{bmatrix}$$

2. Next, we will use the second method. This says that rows of $C$ are linear combinations of rows of $A$.

   Row 1: $\begin{bmatrix} 1 & 2 & 4 \end{bmatrix} = 1 \begin{bmatrix} 1 & 0 & 2 \end{bmatrix} + 2 \begin{bmatrix} 0 & 1 & 1 \end{bmatrix}$. The coefficients $1$ and $2$ in front of the rows of $A$ are taken from the first row of $B$.

3. The next method states that columns of $C$ are linear combinations of columns of $B$.

   Column 2: $\begin{bmatrix} 2 \\ 4 \end{bmatrix} = 0 \begin{bmatrix} 1 \\ 3 \end{bmatrix} + 1 \begin{bmatrix} 2 \\ 4 \end{bmatrix}$, with the coefficients $0$ and $1$ taken from the second column of $A$. This allows us to fill in another column of $C$.
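Finally, a NumPy run-through of this worked example, confirming the entries of $C$ and the row and column combinations used in methods 2 and 3:

```python
import numpy as np

# The matrices from the worked example above.
B = np.array([[1, 2],
              [3, 4]])           # 2 x 2
A = np.array([[1, 0, 2],
              [0, 1, 1]])        # 2 x 3

C = B @ A                        # BA is defined; AB would raise an error
print(C)                         # [[ 1  2  4]
                                 #  [ 3  4 10]]

print(1 * A[0] + 2 * A[1])       # row 1 of C via method 2: [1 2 4]
print(0 * B[:, 0] + 1 * B[:, 1]) # column 2 of C via method 3: [2 4]
```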