I have some questions about the answers here.
Suppose we are given a subspace $S$ of $F^n$ spanned by $v_1,\dots,v_n$ and we are asked to find a maximal collection of linearly independent vectors in $S$. Arturo Magidin's answer in the link gives the following algorithm:
- Place the vectors as columns of a matrix. Call this matrix $A$.
- Use Gaussian elimination to reduce the matrix to row-echelon form, $B$.
- Identify the columns of $B$ that contain the leading $1$s (the pivots).
- The columns of $A$ that correspond to the columns identified in step 3 form a maximal linearly independent set of our original set of vectors.
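The steps above can be sketched in code. This is a minimal illustration (not from the linked answer) using SymPy's exact row reduction; the example vectors $v_1, v_2, v_3$ are my own, chosen so that $v_3 = v_1 + v_2$:

```python
import sympy as sp

# Hypothetical example vectors in F^3 with v3 = v1 + v2, so they are dependent
v1 = [1, 0, 1]
v2 = [0, 1, 1]
v3 = [1, 1, 2]

# Step 1: place the vectors as the *columns* of a matrix A
A = sp.Matrix([v1, v2, v3]).T

# Steps 2-3: reduce to (reduced) row-echelon form and read off the pivot columns
B, pivot_cols = A.rref()

# Step 4: the corresponding columns of the *original* A form a maximal
# linearly independent subset of {v1, v2, v3}
independent = [A.col(j) for j in pivot_cols]
print(pivot_cols)  # (0, 1): v1 and v2 are kept, v3 is redundant
```

Note that `rref` goes all the way to reduced row-echelon form, but the pivot columns are the same ones a plain row-echelon reduction would identify.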
So I believe $S$ is precisely the column space of $A$.
I understand that the pivot columns of $B$ constitute a maximal linearly independent subset of the column space of $B$. But why do the columns of $A$ corresponding to the pivot columns of $B$ form a maximal linearly independent subset of the column space of $A$? How can I see this?
I am confused because I find it more natural to form the matrix $C$ whose rows are $v_1,\dots,v_n$, perform elementary row operations, and see whether any zero rows show up. Eventually it becomes clear what a maximal linearly independent subset is. And it is also clear why this works: replacing $v_i$ with a linear combination of the $v_j$'s (which is what elementary row operations on $C$ amount to) does not affect the linear (in)dependence of $v_1,\dots,v_n$. Am I right here? In contrast, elementary row operations on $A$ correspond to taking sums/differences (with coefficients) of the coordinates of the $v_i$'s, which makes no sense to me.
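The row-based approach I have in mind can be sketched as follows (same hypothetical vectors as before, with $v_3 = v_1 + v_2$); a zero row appearing during reduction signals that the original rows were dependent, and the number of nonzero rows gives the dimension of $S$:

```python
import sympy as sp

# Same hypothetical vectors, now placed as the *rows* of C
v1 = [1, 0, 1]
v2 = [0, 1, 1]
v3 = [1, 1, 2]
C = sp.Matrix([v1, v2, v3])

# Row-reduce C; the nonzero rows of the result form a basis of S
R, _ = C.rref()
nonzero_rows = [R.row(i) for i in range(R.rows) if any(R.row(i))]
print(len(nonzero_rows))  # 2: the span of v1, v2, v3 has dimension 2
```

One caveat: the nonzero rows of the reduced matrix form a basis of $S$, but in general they are no longer among the original $v_i$, so to name a maximal independent subset of the original vectors one still has to track which rows were zeroed out.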