
Some questions about answers from here.

Suppose we are given a subspace $S$ of $F^n$ spanned by $v_1,\dots,v_n$ and we are asked to find a maximal collection of linearly independent vectors among them. The answer of Arturo Magidin in the link gives the following algorithm:

  1. Place the vectors as columns of a matrix. Call this matrix $A$.
  2. Use Gaussian elimination to reduce the matrix to row-echelon form, $B$.
  3. Identify the columns of $B$ that contain the leading $1$s (the pivots).
  4. The columns of $A$ that correspond to the columns identified in step 3 form a maximal linearly independent set of our original set of vectors.
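The four steps above can be sketched in SymPy, whose `rref` returns both the reduced row-echelon form and the indices of the pivot columns (a minimal sketch; the example vectors are made up for illustration):

```python
from sympy import Matrix

# Made-up spanning vectors; v3 = v1 + v2 is a deliberate dependence.
v1 = [1, 0, 1]
v2 = [0, 1, 1]
v3 = [1, 1, 2]   # dependent: v1 + v2
v4 = [0, 0, 1]

# Step 1: place the vectors as COLUMNS of a matrix A.
A = Matrix([v1, v2, v3, v4]).T

# Steps 2-3: row-reduce and read off which columns contain pivots.
B, pivots = A.rref()
print(pivots)  # (0, 1, 3): columns 0, 1 and 3 of B contain pivots

# Step 4: the corresponding columns of the ORIGINAL matrix A form a
# maximal linearly independent subset of {v1, v2, v3, v4}.
independent = [A.col(j) for j in pivots]
```

Note that in step 4 one takes the columns of $A$, not of $B$; the pivot positions of $B$ only tell you *which* of the original vectors to keep.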

So I believe $S$ is precisely the column space of $A$.

I understand that the columns of $B$ that contain pivots constitute a maximal linearly independent subset of the column space of $B$. But why do the columns of $A$ corresponding to the pivot columns of $B$ form a maximal linearly independent subset of the column space of $A$? How can one see this?

I am confused because I find it more natural to form the matrix $C$ whose rows are $v_1,\dots,v_n$, perform elementary row operations, and see whether any zero rows show up. Eventually it becomes clear what a maximal linearly independent subset is. And it is also clear why this works: replacing $v_i$ with a linear combination of the $v_j$'s (which is what elementary row operations on $C$ do) does not affect the linear (in)dependence of $v_1,\dots, v_n$. Am I right here? In contrast, what elementary row operations on $A$ correspond to is taking sums/differences (with coefficients) of the coordinates of the $v_i$'s, which makes no sense to me.

  • Just transpose the matrix, then columns become rows. (2017-02-24)

1 Answer


There is just one subtle difference: let $S := \{ v_1, \dots, v_n\}$ be a set of vectors and denote by $\operatorname{span} S$ the subspace it generates. Arturo's method gives you a linearly independent subset of $S$, i.e. vectors $v_{i_1}, \dots, v_{i_k} \in S$, whereas your method gives you linear combinations of these, i.e. elements of $\operatorname{span} S$ which need not coincide with your original vectors.

Now let us see why Arturo's method makes sense. Denote by $A$ the matrix having $v_1,\dots,v_n$ as columns, and denote by $B$ its row echelon form, with pivot elements $\beta_1,\dots, \beta_r$ positioned at $(1,i_1),\dots,(r,i_r)$. We claim that the vectors $v_{i_1},\dots, v_{i_r}$ are linearly independent. To this end, consider the matrix $A'$ consisting only of the columns $v_{i_1}, \dots,v_{i_r}$. Applying the same row operations that transform $A$ into $B$, you can transform $A'$ into a matrix $B'$ which is still in row echelon form with $r$ pivots, and thus has full column rank $r$. But then $A'$ has rank $r$ as well, since elementary row operations do not change the rank. And if $A'$ has full column rank, the vectors $v_{i_1}, \dots,v_{i_r}$ must be linearly independent.
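This argument can be checked numerically: restricting $A$ to its pivot columns yields a submatrix $A'$ whose rank equals the number of pivots. A short sketch with a made-up matrix (column 1 is twice column 0, so there are only two pivots):

```python
from sympy import Matrix

# Made-up example: column 1 = 2 * column 0, so rank A = 2.
A = Matrix([[1, 2, 3],
            [2, 4, 7],
            [3, 6, 10]])

B, pivots = A.rref()           # pivots is a tuple of column indices
A_prime = A[:, list(pivots)]   # keep only the pivot columns of A

# Row operations preserve rank, and the pivot columns of B are in
# echelon form with r pivots, so A' has rank r = len(pivots).
assert A_prime.rank() == len(pivots) == A.rank()
```

The key point the code illustrates is that the rank check is performed on the columns of the *original* matrix $A$, exactly as the answer's argument requires.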