
I'm reading Galois Theory by Steven H. Weintraub (second edition), and I'm finding that I'm at least somewhat short on the prerequisites. However, the following proof looks wrong to me -- am I misunderstanding something, or is it actually an incorrect proof?

Lemma 2.2.3. Let $F$ be a field and $R$ an integral domain that is a finite-dimensional $F$-vector space. Then $R$ is a field.

Proof. We need to show that any nonzero $r \in R$ has an inverse. Consider $\{1, r, r^2, \cdots\}$. This is an infinite set of elements of $R$, and by hypothesis $R$ is finite dimensional as an $F$-vector space, so this set is linearly dependent. Hence $\sum_{i=0}^n{c_i r^i} = 0$ for some $n$ and some $c_i \in F$ not all zero.

It then goes on to show, given the above, that we can derive an inverse for $r$.

However, if I consider examples like $r = 2 \in \mathbb{Q}[\sqrt{2}]$, $r = \sqrt{2} \in \mathbb{Q}[\sqrt{2}]$, or $r = 2 \in \mathbb{Q}[X]/{}$, the set $\{1, r, r^2, \ldots\}$ doesn't look linearly dependent to me.

I do believe the lemma is true (and might even be able to prove it), but this does not look like a correct proof to me. Am I missing something?

[Edit] Well yes, I am. Somehow I had managed to discount the possibility of any $c_i$ being negative, despite repeatedly looking at each fragment of the quoted text in an attempt to find what I might be misunderstanding.

  • For some extended discussion see my [answer here.](http://math.stackexchange.com/a/100201/242) See also this [duplicate question.](http://math.stackexchange.com/q/146857/242) (2012-05-19)

3 Answers

8

$\{1,2,4,8,\ldots\}$ is certainly $\mathbb{Q}$-linearly dependent in $\mathbb{Q}[\sqrt{2}]$; in fact, it is linearly dependent in $\mathbb{Q}$ already! $0 = 2(1) -1(2)$, with the elements in parentheses being the vectors. So this is a nontrivial linear combination of the vectors in the set which is equal to $0$.

For $\sqrt{2}$, the set is $\{1,\sqrt{2},2,2\sqrt{2},4,\ldots\}$. Again, this is $\mathbb{Q}$-linearly dependent, since $0 = 2(1) + 0(\sqrt{2}) -1(2)$. Again, this is a nontrivial linear combination of the vectors in the set which is equal to $0$.

What is it that makes it look "not dependent" to you?

  • Stupidity, mainly, perhaps with a bit of "out of practice" thrown in. Somehow I had it fixed in my head that the sum could not be zero without the powers somehow "wrapping around" past infinity, such as they might in modular arithmetic. Thanks. :) (2011-09-12)
59

Weintraub's 15-line proof is correct but clumsy. Here is a 2-line proof:

Given $0\neq r\in R$ the $F$-linear map $R\to R:x\mapsto rx$ is injective ($R$ is an integral domain!), hence surjective ($R$ is finite-dimensional!). So $1$ is the image of some $s\in R$, i.e. $sr=1$ and so $s=r^{-1}$ belongs to $R$.
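As a concrete illustration of this argument (my own sketch, not part of the answer), take $R = \mathbb{Q}[\sqrt{2}]$, a 2-dimensional $\mathbb{Q}$-vector space with basis $\{1, \sqrt{2}\}$. The multiplication-by-$r$ map is then a $2\times 2$ rational matrix, and solving $Ms = e_1$ inverts it explicitly. The pair representation and helper names below are my own choices:

```python
from fractions import Fraction

# Illustrative sketch: R = Q[sqrt(2)], with a + b*sqrt(2) stored as the pair (a, b).

def mul(x, y):
    """Multiply two elements of Q[sqrt(2)] given as pairs (a, b)."""
    a, b = x
    c, d = y
    # (a + b*s)(c + d*s) = (ac + 2bd) + (ad + bc)*s,  where s = sqrt(2), s^2 = 2
    return (a * c + 2 * b * d, a * d + b * c)

def inverse(r):
    """Invert nonzero r = (a, b) by inverting the multiplication-by-r matrix.

    In the basis {1, sqrt(2)} the map x -> r*x has matrix
        M = [[a, 2b],
             [b,  a]],
    with det M = a^2 - 2b^2, nonzero for r != 0 since sqrt(2) is irrational.
    Solving M s = e_1 gives s = r^{-1}.
    """
    a, b = r
    det = Fraction(a * a - 2 * b * b)
    return (Fraction(a) / det, Fraction(-b) / det)

r = (0, 1)          # r = sqrt(2)
s = inverse(r)      # s = sqrt(2)/2
print(mul(r, s))    # the identity element (1, 0)
```

Injectivity of $x \mapsto rx$ is exactly the statement that $\det M \neq 0$, which is where the integral-domain hypothesis enters.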

  • ...your answer still applies in this weaker setting. (Although, upon reflection, it seems that the proof that Weintraub had in mind also needs these multiplications to commute, so I retract my earlier remark on this.) (2018-08-30)
13

Another easy solution. I will explicitly construct the inverse.

Since $R$ is finite-dimensional over $F$, the set $\{1, r, r^2, \ldots, r^n\}$ is linearly dependent over $F$ for some finite $n$. In particular, if $r \in R$ and $r \neq 0$, then
$$a_n r^n + a_{n-1} r^{n-1} + \cdots + a_0 = 0$$
for some $a_i \in F$, not all zero. If $a_0 = 0$, then
$$a_n r^n + \cdots + a_1 r = 0 \implies r(a_n r^{n-1} + \cdots + a_1) = 0 \implies a_n r^{n-1} + \cdots + a_1 = 0,$$
since $R$ is an integral domain. If $a_1 = 0$, repeat the previous step; since not all of the $a_i$ are zero, this process terminates once we reach some nonzero $a_i$. Therefore we may assume WLOG that $a_0 \neq 0$. But then
$$a_n r^n + \cdots + a_0 = 0 \implies a_n r^n + \cdots + a_1 r = -a_0 \implies r(b_n r^{n-1} + \cdots + b_1) = 1,$$
where $b_i = -a_0^{-1} a_i$, showing that $r$ has an inverse in $R$.
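The recipe above can be checked computationally. Here is a small sketch of my own, again in $R = \mathbb{Q}[\sqrt{2}]$ with $a + b\sqrt{2}$ stored as the pair $(a, b)$: given a dependence relation with $a_0 \neq 0$, the inverse is $-a_0^{-1}(a_n r^{n-1} + \cdots + a_1)$. The function names are hypothetical:

```python
from fractions import Fraction

# Illustrative check of the construction above, in R = Q[sqrt(2)].

def mul(x, y):
    """Multiply (a, b) and (c, d), representing a + b*sqrt(2) and c + d*sqrt(2)."""
    a, b = x
    c, d = y
    return (a * c + 2 * b * d, a * d + b * c)  # uses sqrt(2)^2 = 2

def inverse_from_relation(r, coeffs):
    """Given coeffs = [a_0, ..., a_n] with sum a_i r^i = 0 and a_0 != 0,
    return r^{-1} = -(1/a_0) * (a_n r^{n-1} + ... + a_1)."""
    a0 = Fraction(coeffs[0])
    assert a0 != 0
    acc = (Fraction(0), Fraction(0))
    power = (Fraction(1), Fraction(0))   # r^0 = 1
    for ai in coeffs[1:]:                # accumulate a_{i+1} * r^i for i = 0..n-1
        acc = (acc[0] + ai * power[0], acc[1] + ai * power[1])
        power = mul(power, r)
    return (-acc[0] / a0, -acc[1] / a0)

r = (0, 1)            # r = sqrt(2)
coeffs = [-2, 0, 1]   # the dependence relation r^2 - 2 = 0, with a_0 = -2 != 0
s = inverse_from_relation(r, coeffs)
print(mul(r, s))      # the identity (1, 0): s is indeed r^{-1}
```

For example, $r = 1 + \sqrt{2}$ satisfies $r^2 - 2r - 1 = 0$, and the same recipe yields $r^{-1} = r - 2 = \sqrt{2} - 1$.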