
I'm reading Galois Theory by Steven H. Weintraub (second edition), and finding that I'm at least somewhat short on the prerequisites. However the following proof looks wrong to me - am I misunderstanding something, or is it actually an incorrect proof?

Lemma 2.2.3. Let $F$ be a field and $R$ an integral domain that is a finite-dimensional $F$-vector space. Then $R$ is a field.

Proof. We need to show that any nonzero $r \in R$ has an inverse. Consider $\{1, r, r^2, \cdots\}$. This is an infinite set of elements of $R$, and by hypothesis $R$ is finite dimensional as an $F$-vector space, so this set is linearly dependent. Hence $\sum_{i=0}^n{c_i r^i} = 0$ for some $n$ and some $c_i \in F$ not all zero.

It then goes on to show, given the above, that we can derive an inverse for $r$.

However, if I consider examples like $r = 2 \in \mathbb{Q}[\sqrt{2}]$, $r = \sqrt{2} \in \mathbb{Q}[\sqrt{2}]$ or $r = 2 \in \mathbb{Q}[X]/\langle X^2\rangle$, the set $\{1, r, r^2, \ldots\}$ doesn't look linearly dependent to me.

I do believe the lemma is true (and might even be able to prove it), but this does not look like a correct proof to me. Am I missing something?

[Edit] Well yes, I am. Somehow I had managed to discount the possibility of any $c_i$ being negative, despite repeatedly looking at each fragment of the quoted text in an attempt to find what I might be misunderstanding.

  • 4
You should check if the set *is* linearly dependent or not... For example, if $r=\sqrt2$ in $\mathbb Q[\sqrt2]$, is the set $\{1,r,r^2,\dots\}$ linearly independent or not? Notice I am not asking if you believe it is, or if it looks so, but if it is :) 2011-09-12
  • 2
    Note that $\mathbb Q[X]/\langle X^2\rangle$ is not an integral domain (since $XX=0$ there), so the lemma is not supposed to hold there.2011-09-12
  • 0
    You probably meant $Q[X]/\langle X^2-2\rangle$... still true that $\{1,2,4,8,\ldots\}$ ( or perhaps $\{1+(X^2-2), 2+(X^2-2), 4+(X^2-2),\ldots\}$) is $\mathbb{Q}$-linearly dependent.2011-09-12
  • 0
@Henning: ah sorry, clearly I need to look again at precisely what $\mathbb{Q}[X]/\langle X^2\rangle$ means. But it turns out my question was deeply misguided in any case (or at least, revolved solely around the "what have I misunderstood" part).2011-09-12
  • 0
    For some extended discussion see my [answer here.](http://math.stackexchange.com/a/100201/242) See also this [duplicate question.](http://math.stackexchange.com/q/146857/242)2012-05-19

3 Answers

8

$\{1,2,4,8,\ldots\}$ is certainly $\mathbb{Q}$-linearly dependent in $\mathbb{Q}[\sqrt{2}]$; in fact, it is linearly dependent in $\mathbb{Q}$ already! $0 = 2(1) -1(2)$, with the elements in parentheses being the vectors. So this is a nontrivial linear combination of the vectors in the set which is equal to $0$.

For $\sqrt{2}$, the set is $\{1,\sqrt{2},2,2\sqrt{2},4,\ldots\}$. Again, this is $\mathbb{Q}$-linearly dependent, since $0 = 2(1) + 0(\sqrt{2}) -1(2)$. Again, this is a nontrivial linear combination of the vectors in the set which is equal to $0$.
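These dependences can be checked mechanically by representing elements of $\mathbb{Q}[\sqrt{2}]$ as coordinate pairs in the basis $\{1,\sqrt{2}\}$ (the coordinate bookkeeping here is my own illustration, not from the answer):

```python
from fractions import Fraction as F

# Elements of Q[sqrt(2)] as coordinate pairs (a, b), meaning a + b*sqrt(2).
one   = (F(1), F(0))   # r^0 = 1
root2 = (F(0), F(1))   # r^1 = sqrt(2)
two   = (F(2), F(0))   # r^2 = 2

# The nontrivial combination 2*(1) + 0*(sqrt(2)) - 1*(2) from the answer:
combo = tuple(2*x + 0*y - 1*z for x, y, z in zip(one, root2, two))
print(combo)  # the zero vector: (Fraction(0, 1), Fraction(0, 1))
```

Since a nontrivial combination of the vectors gives zero, the set is dependent, exactly as claimed.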

What is it that makes it look "not dependent" to you?

  • 0
    @Henning: Sigh; thanks.2011-09-12
  • 0
    Stupidity, mainly, perhaps with a bit of "out of practice" thrown in. Somehow I had it fixed in my head that the sum could not be zero without the powers somehow "wrapping around" past infinity, such as they might in modular arithmetic. Thanks. :)2011-09-12
54

Weintraub's 15-line proof is correct but clumsy. Here is a 2-line proof:

Given $0\neq r\in R$ the $F$-linear map $R\to R:x\mapsto rx$ is injective ($R$ is an integral domain!), hence surjective ($R$ is finite-dimensional!). So $1$ is the image of some $s\in R$, i.e. $sr=1$ and so $s=r^{-1}$ belongs to $R$.
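This argument is also effective: writing down the matrix of $x\mapsto rx$ in an $F$-basis of $R$ and inverting it actually computes $r^{-1}$. A minimal sketch in $R=\mathbb{Q}[\sqrt2]$ with the arbitrary choice $r=1+\sqrt2$ (the basis $\{1,\sqrt2\}$ and the example element are my own, not from the answer):

```python
from fractions import Fraction as F

# Multiplication by r = 1 + sqrt(2) in the basis {1, sqrt(2)}:
#   r * 1       = 1 + sqrt(2)  -> column (1, 1)
#   r * sqrt(2) = 2 + sqrt(2)  -> column (2, 1)
M = [[F(1), F(2)],
     [F(1), F(1)]]

det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
assert det != 0  # the map is injective, hence invertible in finite dimension

# Invert the 2x2 matrix; the preimage of 1 = (1, 0) is the first column of M^{-1}.
Minv = [[ M[1][1]/det, -M[0][1]/det],
        [-M[1][0]/det,  M[0][0]/det]]
s = (Minv[0][0], Minv[1][0])   # s = -1 + sqrt(2)

# Verify s*r = 1, using (a+b*sqrt2)(c+d*sqrt2) = (ac+2bd) + (ad+bc)*sqrt2:
a, b = s
c, d = F(1), F(1)
assert (a*c + 2*b*d, a*d + b*c) == (1, 0)
```

So the abstract injective-implies-surjective step translates directly into a finite linear-algebra computation.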

  • 0
    Nice simple proof indeed but does it answer the question as asked?2011-09-12
  • 24
    @lhf: The OP asks twice (line 3 and last line before the edit) if Weintraub's proof is correct and I have answered that it is. As for his other questions, there is no need to repeat what has been well explained by Arturo and the commentators. On the other hand I thought it could be psychologically useful for the OP to know that his difficulties are due in great part to the suboptimal quality of the proposed proof: students tend to think that it is always their fault if they fail to understand something. That is false.2011-09-12
  • 0
    @GeorgesElencwajg :Is $r$ taken from $R$ ?2015-03-13
  • 0
    Dear @Saun: yes. I have edited my answer to make that explicit.2015-03-13
  • 3
@GeorgesElencwajg: But then can you please explain why $x \to rx$ is $F$-linear? I am having trouble with the homogeneous part; let $\alpha \in F$, then $f(\alpha .x)=r(\alpha . x)=(\alpha.x)r$ (as $R$ is assumed to be commutative) and $\alpha.f(x)=\alpha .(rx)=\alpha.(xr)$, and I am unable to see how $f(\alpha .x)$ and $\alpha.f(x)$ are equal. Please help.2015-03-14
  • 0
    Sorry, I fail to understand why the condition that $R$ be finite-dimensional implies the surjectivity of the map. Could you explain it please?2016-10-14
  • 0
    Any injective endomorphism of a finite-dimensional vector space is surjective. This is basic linear algebra: it is proved in most linear algebra textbooks.2016-10-14
  • 0
    Dear @Georges: sorry to comment on an old post, but I too am concerned about the issue raised in user217921's comment. It seems that this answer works if $R$ is an $F$-algebra, whereas Weintraub's proof (while indeed clumsier) only uses that $R$ is an $F$-vector space. Perhaps I am mistaken, but it is not clear to me that the vector space multiplication should always ''commute'' with ring multiplication. Could you please weigh in? Thanks!2018-08-30
  • 0
Dear @Alex: Yes, vector space multiplication definitely "commutes" with ring multiplication: this is part of the [definition of an algebra](https://en.wikipedia.org/wiki/Algebra_over_a_field#Definition). All books I know give the same definition as Wikipedia in the link above. From what book did you study the definition of algebra?2018-08-30
  • 0
    Dear @Georges: I'm sorry to have been unclear. My definition of algebra agrees with those listed on Wikipedia, and I am aware that if $R$ is an $F$-algebra, the two structural multiplications commute. The point I had hoped to make was that it does not appear in the theorem statement (at least as transcribed by the OP) that $R$ is given to be an $F$-algebra, but only the weaker structure of an $F$-vector space. Perhaps this concern is needlessly pedantic, since most of the rings which are vectors spaces in my life arise as algebras over a field, but I was merely wondering whether or not...2018-08-30
  • 0
    ...your answer still applies in this weaker setting. (Although, upon reflection, it seems that the proof that Weintraub had in mind also needs these multiplications to commute, so I retract my earlier remark on this.)2018-08-30
13

Another easy solution. I will explicitly construct the inverse.

Since $R$ is finite-dimensional over $F$, the set $\{1,r,r^{2},\ldots,r^{n}\}$ is linearly dependent over $F$ for some finite $n$. In particular, if $0 \neq r \in R$, then $a_{n}r^{n}+a_{n-1}r^{n-1}+\cdots+a_{0} =0$ for some $a_{i} \in F$, not all zero. If $a_{0}=0$ then $$a_{n}r^{n}+a_{n-1}r^{n-1}+\cdots+a_{1}r=0 \implies r(a_{n}r^{n-1}+a_{n-1}r^{n-2}+\cdots+a_{1})=0 \implies a_{n}r^{n-1}+a_{n-1}r^{n-2}+\cdots+a_{1}=0$$ since $R$ is an integral domain and $r \neq 0$. If $a_{1}=0$, repeat the previous step. This process must terminate at some nonzero $a_{i}$, since not all the $a_{i}$ are zero. Therefore we may assume WLOG that $a_{0} \neq 0$. But then $$a_{n}r^{n}+a_{n-1}r^{n-1}+\cdots+a_{0} =0 \implies a_{n}r^{n}+a_{n-1}r^{n-1}+\cdots+a_{1}r=-a_{0} \implies b_{n}r^{n}+b_{n-1}r^{n-1}+\cdots+b_{1}r=r(b_{n}r^{n-1}+b_{n-1}r^{n-2}+\cdots+b_{1}) =1$$ where $b_{i}=-a_{0}^{-1}a_{i}$, showing that $r$ has an inverse in $R$.
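This construction is fully algorithmic once a dependence relation with $a_0 \neq 0$ is in hand. A small sketch for the example $r=\sqrt{2}$ in $\mathbb{Q}[\sqrt{2}]$ (the choice of $r$ and its relation $r^2-2=0$ are my own illustration, not from the answer):

```python
from fractions import Fraction as F

# a[i] = coefficient of r^i in the dependence relation; here r = sqrt(2)
# satisfies r^2 - 2 = 0, so (a0, a1, a2) = (-2, 0, 1), and a0 != 0 already.
a = [F(-2), F(0), F(1)]

# Set b_i = -a0^{-1} * a_i for i >= 1; then r*(b_n r^{n-1} + ... + b_1) = 1,
# i.e. r^{-1} = b_1 + b_2*r + ... + b_n*r^{n-1}.
b = [-ai / a[0] for ai in a[1:]]   # [b_1, b_2]
print(b)  # [Fraction(0, 1), Fraction(1, 2)], i.e. sqrt(2)^{-1} = (1/2)*sqrt(2)
```

The computed inverse $\tfrac{1}{2}\sqrt{2}$ indeed satisfies $\sqrt{2}\cdot\tfrac{1}{2}\sqrt{2}=1$.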