
Suppose one wants to find a basis for the vector subspace of $\mathbb{R}^4$ consisting of the points for which $x+y+z+t=0=x+2y+3z+4t$.

Now this is doable in a brute-force way: substitute $x=-(y+z+t)$ into the second equation, solve for $y$ in terms of $z$ and $t$, and then pick the two basis vectors by letting the first be the solution vector evaluated at $(z,t)=(1,0)$ and the second at $(z,t)=(0,1)$.
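Carried out explicitly, that substitution gives
$$-(y+z+t)+2y+3z+4t \;=\; y+2z+3t \;=\; 0 \quad\Longrightarrow\quad y=-2z-3t,\qquad x=z+2t,$$
so the two basis vectors obtained this way are $(1,-2,1,0)$ and $(2,-3,0,1)$.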

However, it feels to me that this whole method should be streamlined by linear algebra somehow. Is there a quick way to see this with a matrix?

More generally, if one had linear equations $g(x,y,\dots,z)=h(x,y,\dots,z)=\cdots=j(x,y,\dots,z)=0$, how can one find a basis for the resulting subspace?

EDIT: The naive approach would seem to be to take the matrix with $(1\ 1\ 1\ 1)$ as the top row and $(1\ 2\ 3\ 4)$ as the bottom row and row reduce. But this only tells me that the subspace is two-dimensional, which (coincidentally) makes sense in this case but would not in general if one had, say, 11 variables. I'm stumped.
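For reference, the row reduction in question is
$$\begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & 2 & 3 & 4 \end{pmatrix} \;\longrightarrow\; \begin{pmatrix} 1 & 0 & -1 & -2 \\ 0 & 1 & 2 & 3 \end{pmatrix},$$
i.e. $x=z+2t$ and $y=-2z-3t$ with $z$ and $t$ free.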

1 Answer


One (potentially) quicker way would be to first find a spanning set of vectors, ignoring linear independence, and then keep removing vectors until you are left with a spanning set from which no further vector can be removed. This will be a basis for your space, since given any vector space $V$ and any spanning set $A$, $A$ can be reduced to a basis of $V$.
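To illustrate with a hypothetical starting point for the subspace in the question: suppose one had written down the spanning set $\{(1,-2,1,0),\ (2,-3,0,1),\ (3,-5,1,1)\}$. The third vector is the sum of the first two, so it can be discarded, and the remaining two vectors are linearly independent, hence a basis.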

  • I don't quite understand. Could you maybe rederive my own solution with your method? Are you talking about the matrix with $(1\ 1\ 1\ 1)$ as the top row and $(1\ 2\ 3\ 4)$ as the bottom? And then row reduce? (2012-12-05)