In linear algebra we have been talking a lot about the three elementary row operations. What I don't understand is why we can't multiply any column by a constant. Since a matrix is really just a grouping of column vectors, shouldn't we be able to multiply a whole column by a constant but maintain the same set of solutions for the original and resulting matrices?
Are column operations legal in matrices also?
A matrix isn't really just a grouping of column vectors. – 2011-09-14
8 Answers
You might find Building generalized inverses of matrices using only row and column operations interesting. As others have mentioned, "column operations" are multiplying on the right by an elementary matrix. You can use them to push further than reduced row echelon form and, given $A \in \mathbb R^{m \times n}$, you can compute $P$ and $Q$ such that $ PAQ = \left[ \begin{array}{c|c} I_r & \mathbf{0}_{r \times (n-r)} \\ \hline \mathbf{0}_{(m-r) \times r} & \mathbf{0}_{(m-r) \times (n-r)} \\ \end{array} \right] $ where $r = \mathrm{rank}\left(A\right)$ and $P$ and $Q$ are invertible. You can get $P$ with just row operations but you need column operations to get $Q$.
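To make this concrete, here is a minimal sketch in Python (using sympy), with a made-up rank-2 matrix as `A`. It exploits the fact that column operations on a matrix are row operations on its transpose: row-reducing the augmented block $[A \mid I_m]$ records $P$, and row-reducing $[(PA)^T \mid I_n]$ records $Q^T$.

```python
import sympy as sp

# A made-up 3x3 example of rank 2.
A = sp.Matrix([[1, 2, 3], [2, 4, 6], [1, 0, 1]])
m, n = A.shape

# Row-reduce [A | I_m]: the right half records P, the product of the
# elementary row operations performed, so RA = P*A is the rref of A.
R = A.row_join(sp.eye(m)).rref()[0]
RA, P = R[:, :n], R[:, n:]

# Column operations on RA are row operations on RA^T. Reducing
# [RA^T | I_n] records Q^T, and P*A*Q has the block form [[I_r, 0], [0, 0]].
C = RA.T.row_join(sp.eye(n)).rref()[0]
Q = C[:, m:].T

print(P * A * Q)  # [[1, 0, 0], [0, 1, 0], [0, 0, 0]] since rank(A) = 2
```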
Yes we can, but the question is what properties of the matrix we preserve when we do so. The row operations (which correspond to multiplication by an invertible matrix on the left) preserve the row space (the linear span of the rows) and the null space. The column operations preserve the column space (the linear span of the columns) and the null space of the transpose, but they don't preserve the row space or the null space.
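A quick numerical illustration of this (a sketch in Python with sympy; the matrix and the particular operations are just made-up examples):

```python
import sympy as sp

A = sp.Matrix([[1, 2, 3], [4, 5, 6], [7, 8, 9]])  # made-up rank-2 example

B = A.copy(); B[1, :] = B[1, :] - 4 * B[0, :]  # row op: R2 -> R2 - 4*R1
C = A.copy(); C[:, 0] = 3 * C[:, 0]            # column op: scale column 1 by 3

print(A.nullspace() == B.nullspace())  # True: row ops preserve the nullspace
print(A.nullspace() == C.nullspace())  # False: column ops change it
```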
Let's say we have some relation, like
$\left( \begin{array}{ccc} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{array} \right) \left( \begin{array}{ccc} x \\ y \\ z\end{array}\right) = \left( \begin{array}{ccc} 10 \\ 11 \\ 12\end{array} \right)$
This carries a lot of information. For instance, it says that $x + 2y + 3z = 10$. One can solve for the full set of $(x, y, z)$ satisfying this relationship (the coefficient matrix here is singular, so there are infinitely many solutions). But suppose we multiply the first column by 3:
$\left( \begin{array}{ccc} 3 & 2 & 3 \\ 12 & 5 & 6 \\ 21 & 8 & 9 \end{array} \right) \left( \begin{array}{ccc} x \\ y \\ z\end{array}\right) = \left( \begin{array}{ccc} 10 \\ 11 \\ 12\end{array} \right)$
These two systems cannot have the same solutions (you can solve both fully if you want; in fact, I encourage it). If some $(x, y, z)$ satisfied both, then subtracting $x + 2y + 3z = 10$ from $3x + 2y + 3z = 10$ would force $x = 0$, and that is certainly not true of every solution of the original system. What multiplying the first column by 3 really does is rescale the first unknown: $(x, y, z)$ solves the original system exactly when $(x/3, y, z)$ solves the new one, so the solution set changes.
The main idea is that matrices encode linear equations, and the way in which they encode them depends on the laws of matrix multiplication. Row operations are safe because they amount to performing the same legitimate manipulations on the equations themselves: scaling an equation, or adding one equation to another. Column operations instead mix the coefficients of different unknowns, which is not a valid manipulation of the equations. If we wanted to operate on columns, we could still do it with matrices... by multiplying on the right instead of the left. Again, it's an artifact of the way in which matrices multiply.
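To see this concretely, here is a sketch in Python (sympy) that solves both systems above and compares the solution sets:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
A = sp.Matrix([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
B = A.copy(); B[:, 0] = 3 * B[:, 0]   # first column multiplied by 3
b = sp.Matrix([10, 11, 12])

# Both systems are consistent with infinitely many solutions,
# but the solution sets are different: x has been rescaled to x/3.
print(sp.linsolve((A, b), x, y, z))  # {(z - 28/3, 29/3 - 2*z, z)}
print(sp.linsolve((B, b), x, y, z))  # {(z/3 - 28/9, 29/3 - 2*z, z)}
```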
It's the same reason you can multiply both sides of an equation by two, but you can't just multiply both terms on the left side of an equation by two. `X+Y=3 -> 2(X+Y)=6` is good -- if the one on the left is true, the one on the right is true, but `X+Y=3 -> 2X+2Y=3` is bad -- they say completely different things. – 2011-09-14
We can perform elementary column operations: if you multiply a matrix on the right by an elementary matrix, you perform an "elementary column operation".
However, elementary row operations are more useful when dealing with things like systems of linear equations, or finding inverses of matrices. And anything you want to do with columns you can do with rows... in the transpose.
You can develop the entire theory with elementary column operations instead; you just have to set everything up "the other way around". You would look at systems of linear equations as systems of the form $(b_1,b_2,\ldots,b_m) = (x_1,\ldots,x_m)A$ (that is, $\mathbf{b} = \mathbf{x}A$), with row vectors; the coefficients of the $j$th equation would become the $j$th column of the coefficient matrix $A$. And you would perform elementary column operations instead of elementary row operations.
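For instance (a sketch in Python with sympy, using made-up numbers), solving $\mathbf{x}A = \mathbf{b}$ for a row vector $\mathbf{x}$ is the same as solving the transposed system $A^T\mathbf{x}^T = \mathbf{b}^T$ by ordinary row reduction:

```python
import sympy as sp

A = sp.Matrix([[1, 2], [3, 4]])  # made-up invertible coefficient matrix
b = sp.Matrix([[5, 6]])          # right-hand side as a row vector

xT = A.T.solve(b.T)              # solve A^T x^T = b^T the usual way
x = xT.T                         # x = [-1, 2]
print(x * A == b)                # True
```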
When you perform elementary row operations on systems of the form $A\mathbf{x}=\mathbf{b}$, your operations do not respect the columnspace, but they do respect the rowspace and the nullspace. If you perform elementary column operations on these systems, you respect the columnspace, but not the rowspace or the nullspace.
That said, matrices are not "really just a grouping of column vectors". They are a lot more.
Doing something similar to what was mentioned in another answer, but this time in a situation where there is a unique solution rather than infinitely many solutions, first consider the following system: $ \begin{bmatrix} 1 & 2 \\ 4 & 5 \end{bmatrix} \begin{bmatrix} x \\ y\end{bmatrix} = \begin{bmatrix} 10 \\ 11 \end{bmatrix}$ The solution is $x= -28/3$, $y= 29/3$.
If we multiply the elements in the first column by $3$, we get $ \begin{bmatrix} 3 & 2 \\ 12 & 5 \end{bmatrix} \begin{bmatrix} x \\ y\end{bmatrix} = \begin{bmatrix} 10 \\ 11 \end{bmatrix}$ and now the solution is $x=-28/9$, $y=29/3$.
So the solution did not remain the same.
Now remember a couple of things: (1) You can multiply both sides of $u=v$ by the same number, and if what you had was true, then what you get is still true. That justifies row operations, but not column operations. (2) When you do a row operation, you multiply both sides on the left by the same matrix. Suppose we want to add $-4$ times the first row to the second in the first problem above. Then we get $ \begin{bmatrix} 1 & 0 \\ -4 & 1 \end{bmatrix} \begin{bmatrix} 1 & 2 \\ 4 & 5 \end{bmatrix} \begin{bmatrix} x \\ y\end{bmatrix} = \begin{bmatrix} 1 & 0 \\ -4 & 1 \end{bmatrix}\begin{bmatrix} 10 \\ 11 \end{bmatrix}. $ Both sides of the equation are being multiplied on the left by the SAME matrix, so equality is preserved. If we want to multiply on the right rather than on the left, we need a $1\times m$ matrix, for some $m$. Thus $ \begin{bmatrix} 1 & 2 \\ 4 & 5 \end{bmatrix} \begin{bmatrix} x \\ y\end{bmatrix} \begin{bmatrix} \bullet & \bullet & \bullet & \cdots \end{bmatrix} = \begin{bmatrix} 10 \\ 11 \end{bmatrix}\begin{bmatrix} \bullet & \bullet & \bullet & \cdots \end{bmatrix}$
This does not accomplish a column operation. That would have to be done like this: $ \begin{bmatrix} 1 & 2 \\ 4 & 5 \end{bmatrix} \begin{bmatrix} \bullet & \bullet \\ \bullet & \bullet \end{bmatrix}$ But the new factor sits between the coefficient matrix and $\begin{bmatrix} x \\ y \end{bmatrix}$, and there's no way to multiply the right-hand side by the same thing to preserve equality.
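What a column operation does accomplish is a change of variables: if $Q$ is the elementary matrix, then $AQ \cdot (Q^{-1}\mathbf{x}) = \mathbf{b}$, so the modified system is solved by $Q^{-1}\mathbf{x}$, not by $\mathbf{x}$. A sketch in Python (sympy), using the numbers from this answer:

```python
import sympy as sp

A = sp.Matrix([[1, 2], [4, 5]])
b = sp.Matrix([10, 11])
Q = sp.Matrix([[3, 0], [0, 1]])  # elementary matrix: scale column 1 by 3

x = A.solve(b)                       # solution of the original system
print(x)                             # [-28/3, 29/3]
print((A * Q).solve(b) == Q.inv() * x)  # True: new solution is Q^{-1} x
```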
One place where both column and row ops can be used is in finding the determinant of a matrix. Here the choice of ops doesn't matter, because $\det(A) = \det(A^T)$: every rule for how a row operation affects the determinant applies equally to the corresponding column operation, and we are not trying to preserve the solutions of a system $A\mathbf{x} = \mathbf{b}$.
Example: $ \det\left( \begin{array}{cccc} 1 & 0 & 0 & 3 \\ 2 & 7 & 0 & 6 \\ 0 & 6 & 3 & 0 \\ 7 & 3 & 1 & -5 \end{array} \right) $
This one is a bit time-consuming to solve by cofactor expansion, but we can also use elementary ops on it.
You could take the time to work through it just like you were taught in Linear Algebra I or you could take another look at it. What happens if we multiply the first column by -3 and add it to the fourth column? We get:
$ \det\left( \begin{array}{cccc} 1 & 0 & 0 & 3-3 \\ 2 & 7 & 0 & 6-6 \\ 0 & 6 & 3 & 0 \\ 7 & 3 & 1 & -5-21 \end{array} \right) = \det\left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 2 & 7 & 0 & 0 \\ 0 & 6 & 3 & 0 \\ 7 & 3 & 1 & -26 \end{array} \right) = 1 \cdot 7 \cdot 3 \cdot (-26) = -546 $
Bingo! A lower triangular matrix. Adding a multiple of one column to another doesn't change the determinant, so no correction is needed, and the answer is the product of the diagonal entries: $-546$.
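A quick check of this computation (a sketch in Python with sympy):

```python
import sympy as sp

M = sp.Matrix([[1, 0, 0,  3],
               [2, 7, 0,  6],
               [0, 6, 3,  0],
               [7, 3, 1, -5]])

# Adding -3 * (column 1) to column 4 leaves the determinant unchanged.
N = M.copy(); N[:, 3] = N[:, 3] - 3 * N[:, 0]

print(M.det())  # -546
print(N.det())  # -546: N is lower triangular, det = 1*7*3*(-26)
```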
So to sum up, there are times when you can use both column operations and row operations. It works when the quantity you care about, like the determinant, is invariant under both; it fails when you need to preserve the solution set of $A\mathbf{x} = \mathbf{b}$. If you start looking at both column and row operations, it will open up your mind to the power of the matrix. Just be VERY CAREFUL.
-- techdude
I think the reason "elementary" column operations go wrong is our convention for the form of linear equations: each equation is written across a row, with each unknown assigned its own column, which is exactly how we are used to writing them now. You can try writing the equations down the page instead, one equation per column, apply column operations to them, and see what happens.