17

Let $V$ be a finite-dimensional vector space and let $V_i$ be a proper subspace of $V$ for every $1\leq i\leq m$, where $m$ is a positive integer. In my linear algebra text I've seen the result that $V$ can never be covered by $\{V_i\}$, i.e. $V\neq V_1\cup\cdots\cup V_m$, but I don't know how to prove it. I've written down my flawed attempt below:

First, it suffices to prove the result when each $V_i$ is a codimension-one subspace (enlarge each $V_i$ to a hyperplane containing it). Since $\operatorname{codim}(V_i)=1$, we can pick a vector $e_i\in V$ such that $V_i\oplus\mathcal{L}(e_i)=V$, where $\mathcal{L}(v)$ denotes the linear subspace spanned by $v$. Then I chose $e=e_1+\cdots+e_m$ and tried to show that no $V_i$ contains $e$, but I failed.
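For instance, here is a small example (the specific subspaces are just for illustration) of how this choice of $e$ can fail: in $V=\mathbb{R}^2$ take $V_1=\mathcal{L}((1,0))$, $V_2=\mathcal{L}((0,1))$, $V_3=\mathcal{L}((1,1))$ with the legitimate complements $e_1=(0,1)$, $e_2=(1,0)$, $e_3=(1,-1)$; then $e=e_1+e_2+e_3=(2,0)\in V_1$, so the sum lands back inside one of the $V_i$.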

Could you show me a simple, correct proof of this result? Ideas for a proof are also welcome.

Remark: As @Jim Conant mentioned, such a cover is possible over a finite field, so I assume the base field of $V$ is a number field.

  • 0
    See [this previous question](http://math.stackexchange.com/q/10760/742). (2012-05-16)

6 Answers

28

[Edit] This answer is contained in another answer of mine. Sorry about that. Switching to CW. [/Edit]

Pick a basis and a system of coordinates $x_1,\ldots,x_n$ for $V$, and assume WLOG that $n \geq 2$. As you observed, we may also assume without loss of generality that the subspaces all have codimension one, i.e. each is the solution space of a single homogeneous equation $a_1x_1+a_2x_2+\cdots +a_nx_n=0$ (with not all $a_i$ zero) in the coordinates $x_i$, $i=1,\ldots,n$. Consider the set $S=\{(1,t,t^2,\ldots,t^{n-1})\mid t\in k\}$, which is infinite because $k$ is. A single such subspace intersects $S$ in at most $n-1$ points, because the nonzero polynomial $a_1+a_2t+a_3t^2+\cdots+a_nt^{n-1}$ has at most $n-1$ roots in $k$.

Therefore it is impossible to cover all of $S$, hence all of $V$, with finitely many subspaces.
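For a concrete instance (the particular hyperplane here is just an illustration): take $n=3$ and the hyperplane $x_1-2x_2+x_3=0$. It meets $S=\{(1,t,t^2)\mid t\in k\}$ exactly where $1-2t+t^2=(t-1)^2=0$, i.e. only at the point $(1,1,1)$, while $S$ itself is infinite.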

Note that if $k$ is uncountable, then this argument shows that we need uncountably many subspaces.

  • 0
    I had a recollection of having used this argument earlier here. Somewhat against my expectations that other question was not tagged with *finite-fields*, so I didn't find it. Today another similar question was asked, and joriki found my answer [here.](http://math.stackexchange.com/a/60719/11619) I am switching this to CW, for it is surely bad form to try and collect upvotes for the same answer in two different locations. The other answer is more comprehensive, so this will have to go. Sorry about this. (2012-05-25)
16

Do you know the proof that the union of two subspaces of a vector space is a subspace if and only if one of the two subspaces is contained in the other? If the field is infinite, one can come up with a similar proof of your statement.

Assume $V$ is covered by finitely many $V_i$, and assume that the cover is minimal. Then WLOG there is a $v\in V_1$ which is not in any other $V_i$, and also WLOG a $w\in V_2\setminus V_1$. Then the vectors $av+w$ for $0\neq a\in k$ (where $k$ is the base field) lie in pairwise different spaces $V_i$: if $av+w$ and $bv+w$ with $a\neq b$ both lie in $V_i$, then so does $(a-b)v$, hence $v\in V_i$, which forces $i=1$ by the choice of $v$; but then $w=(av+w)-av\in V_1$, contradicting the choice of $w$. Since $k$ is infinite, there are infinitely many such vectors but only finitely many $V_i$, which proves your statement.
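A picture of what is going on (an illustrative special case, not part of the argument above): in $V=\mathbb{R}^2$ take $v=(1,0)$ and $w=(0,1)$; the vectors $av+w=(a,1)$ all lie on the horizontal line $x_2=1$, and every proper subspace of $\mathbb{R}^2$ (a line through the origin, or $\{0\}$) meets this horizontal line in at most one point, so finitely many proper subspaces can contain only finitely many of the $av+w$.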

  • 0
    Wonderful argument! Thank you. (2017-05-12)
10

Let me prove that if $k$ is an infinite field, then a finite number of hyperplanes $H_1,\ldots, H_r$ cannot cover the vector space $k^n$.

If we had $k^n=\bigcup_{i=1}^{r} H_i$, where $H_i$ is the kernel of the non-zero linear form $l^{(i)}(x_1,\ldots,x_n)=\sum_{j=1}^{n}a_j^{(i)}x_j\in (k^n)^\ast$, then the degree-$r$ polynomial $P(x_1,\ldots,x_n)=\prod_{i=1}^{r} l^{(i)}(x_1,\ldots,x_n)\in k[x_1,\ldots,x_n]$ would vanish at all points of $k^n$ without being the zero polynomial.
This is well known to be impossible if $k$ is infinite: Jacobson Theorem 2.19, page 136.
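As a toy case of this product trick (not taken from the references above): in $k^2$ with lines $H_i=\{a_ix_1+b_ix_2=0\}$, restrict $P=\prod_{i=1}^{r}(a_ix_1+b_ix_2)$ to the points $(1,t)$, $t\in k$. This gives the one-variable polynomial $\prod_{i=1}^{r}(a_i+b_it)$, which is nonzero of degree at most $r$ and therefore has at most $r$ roots; so when $k$ is infinite, some point $(1,t)$ lies outside every $H_i$.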

  • 0
    (When I wrote that 300 pages of text for a serious year-long course is a bit compressed, I couldn't help but think of http://math.uga.edu/~pete/2400full.pdf, the lecture notes from a course I just finished for **first year** undergraduates. But my approach to this material scared off some quite strong students, which is really not the American way, and I am not so comfortable with that aspect of it myself.) (2012-05-16)
6

This question was asked on MathOverflow several years ago and received many answers: please see here.

One of these answers was mine. I referred to this expository note, which has since appeared in the January 2012 issue of the American Mathematical Monthly.

  • 2
    @Simon: yes; I believe you; and I upvoted your answer. :) (2012-05-16)
5

This is a special case of the fact that affine space over an infinite field is irreducible. The proof can be found in most books on elementary algebraic geometry (see for example Fulton's *Algebraic Curves*).
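Spelled out (this is the standard deduction, not Fulton's exact wording): each proper subspace $V_i\subsetneq k^n$ is a proper Zariski-closed subset, so a cover $k^n=V_1\cup\cdots\cup V_m$ would write the irreducible space $\mathbb{A}^n_k$ as a finite union of proper closed subsets, which is impossible (an irreducible space is not even the union of two proper closed subsets, and the finite case follows by induction).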

  • 2
    +1: you are warming the heart of any algebraic geometer! The proof is indeed in Fulton's book, Chapter 1, §5, Proposition 1. (2012-05-16)
3

Having been thinking about functional analysis for the past week, I was reminded of Baire's category theorem, though unfortunately this approach assumes the field is $\mathbb{R}$ or $\mathbb{C}$:

A finite-dimensional linear subspace is closed, and a proper linear subspace has empty interior. So by Baire, a countable union of proper finite-dimensional linear subspaces again has empty interior; in particular it is not the whole space.
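For reference, the form of Baire's theorem being used here (as I read the argument): if the complete metric space $V\cong\mathbb{R}^n$ or $\mathbb{C}^n$ is written as $V=\bigcup_{i\geq 1}C_i$ with each $C_i$ closed, then some $C_i$ has nonempty interior; taking the $C_i$ to be proper closed subspaces gives the contradiction.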

(I believe the first sentence is still true in the generality of $V$ being a topological vector space. However, to apply Baire to $V$ we need it to be locally compact Hausdorff (i.e. finite-dimensional, by Riesz) or completely metrizable (e.g. a Fréchet or even F-space).)