
Suppose I wish to show that for a finite-dimensional vector space $V$ with basis $B=\{b_1,\ldots,b_n\}$ and a given subspace $X$ of $V$, there exists a subset of $B$ that spans a subspace $Y$ such that $V=X\oplus Y$. Would the following argument make sense?

Suppose $X$ has a basis $B_X$ whose elements are written in coordinates with respect to $B$. List them, together with the elements of $B$, as the rows of a matrix and perform Gaussian elimination, preserving the $B_X$ rows. Then the remaining (non-zero) rows of the resulting matrix $M$ will form a basis for $V$. Let $M\setminus B_X$ be the basis of $Y$. Then $V=X+Y$.

Added: The matrix $M$ should only have $n$ non-zero rows. What I meant by "perform[ing] Gaussian elimination" is just this: list all the basis vectors in $B$ and those for $X$; there are then linearly dependent vectors, and we use Gaussian elimination to kill those off. But we choose to kill off the basis vectors in the original $B$, leaving those of $X$ in our matrix $M$, so that $M$ contains the basis vectors of $X$ but has only $n$ linearly independent rows.
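
For concreteness, here is a small instance of what I have in mind (just an illustration, with all vectors written in coordinates with respect to $B$): take $V=\mathbb{R}^3$, $B=\{b_1,b_2,b_3\}$ the standard basis, and $X=\operatorname{span}\{(1,1,0)\}$. Listing the single $B_X$ row above the rows of $B$, elimination shows that $b_1=(1,1,0)-b_2$ is redundant, so it is the row we kill off, leaving
$$M=\begin{pmatrix}1 & 1 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\end{pmatrix},$$
whose rows below the $B_X$ row, namely $b_2$ and $b_3$, are taken as the basis of $Y$.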

$X\cap Y=\{0\}$ because the bases for $X$ and $Y$ together form the linearly independent set $M$: a non-zero vector lying in both $X$ and $Y$ would give a non-trivial linear dependence among its elements.

Therefore, the statement is true.

Comment: Firstly, I am not entirely sure that my argument is necessarily valid. Secondly, it seems to lack rigor.

Thanks in advance for any help!

  • I don't understand what you are doing when you say you are "perform[ing] Gaussian elimination preserving the $B_X$ rows". Also, your matrix $M$ seems to have $n+\dim(X)$ rows, the way you describe it. Is that what you want?
  • Sorry about the ambiguities. The matrix $M$ should only have $n$ non-zero rows. What I meant by "perform[ing] Gaussian elimination" is just to list all the basis vectors in $B$ and those for $X$. Then there are linearly dependent vectors, so we use Gaussian elimination to kill those off. But we choose to kill off the basis vectors in the original $B$, leaving those of $X$ in our matrix $M$, so that $M$ contains the basis vectors of $X$ but has only $n$ linearly independent rows. :-)
  • If you "list all the basis vectors in $B$ and those in $X$" then you get $n+\dim(X)$ rows, not $n$ rows, and all of the rows are nonzero. You are either not saying what you mean, or what you mean is wrong.
  • I am not sure about your argument, but take $V=\mathbb{R}^3$ and suppose $X$ is the $x$-$y$ plane spanned by the first two columns of the identity matrix. Then the remaining column of the identity matrix does not span the complement of $X$ in $\mathbb{R}^3$ but only the $z$-axis. Does this provide a counter-example, or am I misreading the direct-sum argument?
  • @Arturo: But after the Gaussian elimination, there will only be $n$ non-zero rows, no? $M$ is the final matrix obtained after the Gaussian elimination.
  • @percusse: The direct sum means you take an element from $X$ and one from $Y$ and add them together. Since $X$ is the $x$-$y$ plane and $Y$ is the $z$-axis, $X\oplus Y = \mathbb{R}^3$. :-)
  • @percusse: You can take $x$ and $z$ (or $y$ and $z$), which span a complement to $\langle x-y\rangle$.
  • @johann: Ah, okay; that's what you are trying to do...
  • Yep, I misread that. But I confused myself since you wrote that it is a finite ["dimensional"] vector space, which kind of led me to dark matters! :)
  • @johann: In the end you should have $n$ non-zero rows. Some of those are supposed to form a basis of $X$, and the remaining ones should have a single 1 and the rest zeros, so that they would be elements of the original basis $B$. I'm not sure exactly how you do that Gaussian elimination step, so it is difficult to judge whether you do it right. I would first put $B_X$ into row echelon form $R$, and then pick the members of the original basis that have their single non-zero coordinate at positions other than the leading ones of $R$. Maybe that's what you did, but you lost me at some point.
  • @johann: I just approved an edit replacing "finite vector space" by "finite dimensional vector space" -- just checking that this is what you intended (and not a vector space over a finite field).
  • @joriki: Yup, you're right. :-)
  • @Jyrki: Thanks :-) I know my writing is not very clear. I am just wondering if there are maybe better ways of proving this...?
  • @johann: I was assuming all the time that the vectors are written using their coordinates w.r.t. the given basis $B$. That's why the vectors of $B$ have a single 1 and $n-1$ zeros. Feel free to expand the use of that $R$ matrix from my earlier comment to an answer! You will be needing the determinant test (or a corresponding observation) for linear independence.
  • @Jyrki: Yes, you are right, that is what I was thinking of too. :)

1 Answer


Here's a useful lemma:

Lemma. Let $\mathbf{V}$ be a vector space, and let $\mathbf{x}_1,\ldots,\mathbf{x}_m$ be an ordered list of vectors. Then the list is linearly dependent if and only if there is a $k$, $1\leq k\leq m$, such that $\mathbf{x}_k$ is a linear combination of $\mathbf{x}_1,\ldots,\mathbf{x}_{k-1}$.

Proof. If there is a $k$ such that $\mathbf{x}_k$ is a linear combination of $\mathbf{x}_1,\ldots,\mathbf{x}_{k-1}$, then the list is linearly dependent.

Conversely, suppose the list is linearly dependent. If $\mathbf{x}_1=\mathbf{0}$, then $\mathbf{x}_1$ is a linear combination of the previous vectors (since the empty sum equals $\mathbf{0}$). If $\mathbf{x}_1\neq\mathbf{0}$, then $\{\mathbf{x}_1\}$ is linearly independent. Now, since the full list is linearly dependent, there is a smallest $k$, $2\leq k\leq m$, such that $\{\mathbf{x}_1,\ldots,\mathbf{x}_{k-1}\}$ is linearly independent but $\{\mathbf{x}_1,\ldots,\mathbf{x}_k\}$ is linearly dependent. That means that $\mathbf{x}_k$ is a linear combination of $\mathbf{x}_1,\ldots,\mathbf{x}_{k-1}$, as desired. $\Box$

Now, let $\beta=\{\mathbf{x}_1,\ldots,\mathbf{x}_m\}$ be a basis for $X$. Define $\gamma_0=\emptyset$.

If $b_1\in\mathrm{span}(\beta)$, then let $\gamma_1=\gamma_0$ (that is, discard $b_1$). If $b_1\notin\mathrm{span}(\beta)$, then let $\gamma_1=\{b_1\}$.

If $b_2\in\mathrm{span}(\beta\cup\gamma_1)$, then let $\gamma_2=\gamma_1$ (discard $b_2$). If $b_2\notin\mathrm{span}(\beta\cup\gamma_1)$, then let $\gamma_2=\gamma_1\cup\{b_2\}$.

Continue this way: assuming you have dealt with $b_k$, if $b_{k+1}\in\mathrm{span}(\beta\cup\gamma_k)$, then let $\gamma_{k+1}=\gamma_k$. If $b_{k+1}\notin\mathrm{span}(\beta\cup\gamma_k)$, then let $\gamma_{k+1}=\gamma_k\cup\{b_{k+1}\}$.

(What this amounts to is taking the list $\mathbf{x}_1,\ldots,\mathbf{x}_m,b_1,\ldots,b_n$ and throwing away any vector that is a linear combination of the previous vectors.)
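
If you want to see the sifting done mechanically, here is a rough computational sketch (not part of the argument itself; it assumes vectors are given as coordinate tuples over the rationals, and uses sympy's rank only to test membership in a span):

```python
import sympy as sp

def sift_complement(basis_X, basis_B):
    """Keep the b_i that are not in the span of beta together with the
    previously kept b_j; the kept b_i form the set gamma_n above."""
    kept = [sp.Matrix(x) for x in basis_X]          # start with beta
    gamma = []
    for b in basis_B:
        col = sp.Matrix(b)
        M = sp.Matrix.hstack(*kept)
        # col lies in span(kept) iff adjoining it does not raise the rank
        if sp.Matrix.hstack(M, col).rank() == M.rank():
            continue                                 # discard this b_i
        kept.append(col)                             # keep this b_i
        gamma.append(b)
    return gamma

# X = span{(1,1,0)} inside R^3, with B the standard basis:
print(sift_complement([(1, 1, 0)], [(1, 0, 0), (0, 1, 0), (0, 0, 1)]))
# prints [(1, 0, 0), (0, 0, 1)], so Y = span{b_1, b_3} works in this case
```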

Now, at the end, $\beta\cup\gamma_n$ will be a basis for all of $V$: it spans $V$ because it has the same span as $\beta\cup\{b_1,\ldots,b_n\}$ (each discarded $b_i$ was already in the span of the vectors kept before it), and it is linearly independent by the Lemma, since no vector in the list is a linear combination of the preceding ones. Moreover, because it is a basis, $\mathrm{span}(\beta)\cap\mathrm{span}(\gamma_n) = \{\mathbf{0}\}$: a non-zero vector lying in both spans would produce a non-trivial linear dependence between the elements of $\beta$ and those of $\gamma_n$. So $\mathrm{span}(\gamma_n)$ gives what you want.

Your idea is okay, but the way you describe it leaves something to be desired: when you do row-reduction, you have no way to guarantee that the non-zero rows below the first $m$ will still correspond to vectors $b_i$; those rows may be some other linear combinations, which makes it hard to argue that you can find a subset of the $b_i$ that will work. It can be made to work, but I would do it by placing the $\mathbf{x}_i$ and the $b_i$ as columns of a matrix and then row-reducing, making sure that the first $m$ columns contain pivots. Then look for the $b_i$ that correspond to columns with pivots in the final, row-reduced form of this matrix.
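
Here is a similarly rough sketch of that column approach (again just an illustration; it relies on the fact that sympy's `rref()` selects pivots from left to right, so the first $m$ columns, being linearly independent, are automatically pivot columns):

```python
import sympy as sp

def complement_from_pivots(basis_X, basis_B):
    """Put the x_i and then the b_i as columns, row-reduce, and return the
    b_i whose columns are pivot columns of the reduced matrix."""
    m = len(basis_X)
    A = sp.Matrix.hstack(*[sp.Matrix(v) for v in list(basis_X) + list(basis_B)])
    _, pivot_cols = A.rref()        # rref() picks pivots from left to right
    # The first m columns are independent, hence automatically pivot columns;
    # the remaining pivot columns single out the b_i we want.
    return [basis_B[j - m] for j in pivot_cols if j >= m]

# Same example as above: X = span{(1,1,0)} inside R^3.
print(complement_from_pivots([(1, 1, 0)], [(1, 0, 0), (0, 1, 0), (0, 0, 1)]))
# prints [(1, 0, 0), (0, 0, 1)]
```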