11

I would like to show that if $R$ is a field, then $R(x)$ is a proper subset of $R((x))$, where $R(x)$ is the ring of rational functions, and $R((x))$ is the ring of formal Laurent series.

If $f \in R(x)$, then $f(x) = f_1(x)f_2^{-1}(x)$, where $f_1(x), f_2(x) \in R[x]$. So I wrote this as $$f(x) = \frac{\sum_{i=0}^{n}a_ix^i}{\sum_{j=0}^{m}b_jx^j}\;,$$ and I would like to show that I can write $f$ in the form $\sum_{k=r}^{\infty}c_kx^k$. However, I am unsure how to manipulate $f$ in order to show this. What I was thinking was to find some formal power series expansion for $f_2^{-1}(x)$, multiply out the summation with $f_1(x)$, then rearrange the coefficients and terms to obtain the desired form. However, I can't seem to derive a formula for the inverse of a polynomial in general that I could use for this. How can I go about manipulating $f_2^{-1}(x)$ to show this? Any suggestions?

Thanks!

  • 2
Your main concern seems to be with *subset*, but you also have to show *proper*. For the latter, consider the power series for $\sin(x)$, for example, and observe that a rational function would have only finitely many zeroes.2012-11-03
  • 3
    @Hagen: The power series of $\sin x$ may not exist, if all those factorials are not invertible in $R$, or equivalently if $R$ has a positive characteristic.2012-11-03
  • 0
@JyrkiLahtonen: You're right that it may not exist for an arbitrary ring $R$, but for my purposes I can assume that $R$ is a field. So every non-zero element should have an inverse :)2012-11-03
  • 3
    @user43552: Yes, but Jyrki is right. That all nonzero elements are invertible does not mean that e.g. $2$ is invertible because we might have $2=0$. Thus you need a different approach to show that the subset is *proper*. I suggest $$f(x)=\sum_{n=0}^{\infty} x^{n!}.$$ Whatever polynomial $q(x)$ you assume as denominator, the gappy high order terms of $q(x)f(x)$ do not cancel, hence $q(x)f(x)$ is not a polynomial.2012-11-03
  • 4
    @Hagen: You might flesh that counterexample out to an answer.2012-11-04

5 Answers

5

HINT: Write $f_2(x)$ in the form $x^rg(x)$, where $g$ has a non-zero constant term. Then $g(x)$ has an inverse in $R[[x]]$.

An easy induction shows that its coefficients can be calculated recursively: just start calculating! For instance, if $g(x)=a_0+a_1x+\ldots+a_mx^m$, and the inverse is to be $h(x)=\sum_{k\ge 0}b_kx^k$, it’s clear that you want $b_0=a_0^{-1}$. Then the first degree term in $g(x)h(x)$ must be $$(a_0b_1+a_1b_0)x=(a_0b_1+a_0^{-1}a_1)x\;,$$

so $a_0b_1+a_0^{-1}a_1=0$, and you can solve for $b_1$. It’s easy to prove that this can be continued recursively.

And from there you’re pretty much home free.
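If it helps to see this recursion concretely, here is a minimal sketch in Python over $\Bbb Q$ (not part of the original answer; the function name and the use of `Fraction` are my own choices):

```python
from fractions import Fraction

def invert_power_series(a, n_terms):
    """First n_terms coefficients b_0, b_1, ... of h = g^{-1}, where
    g(x) = a[0] + a[1]*x + ... + a[m]*x^m and a[0] is invertible.
    Follows the recursion in the answer: for n >= 1 the coefficient of x^n
    in g*h must vanish, which determines b_n from b_0, ..., b_{n-1}."""
    a = [Fraction(c) for c in a]
    b = [1 / a[0]]                      # b_0 = a_0^{-1}
    for n in range(1, n_terms):
        # coefficient of x^n in g*h is a_0*b_n + sum_{i=1}^{min(n,m)} a_i*b_{n-i}
        s = sum(a[i] * b[n - i] for i in range(1, min(n, len(a) - 1) + 1))
        b.append(-s / a[0])             # solve a_0*b_n + s = 0 for b_n
    return b

# Example: g(x) = 1 - x has inverse 1 + x + x^2 + ... in Q[[x]]
print(invert_power_series([1, -1], 6))  # six coefficients, all equal to 1
```

The point is simply that each $b_n$ is determined by $b_0,\ldots,b_{n-1}$ once $a_0$ is invertible, exactly as in the hint.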

  • 0
    Hmm, I think I understand it now. So writing $f$ as $f_1(x) / x^rg(x)$ gives: $x^{-r}f_1(x)/(\sum_{j=0}^{m-r}b_{j+r}x^{j+r})$. Then using the formula provided for the inverse, $\sum_{j=0}^{m-r}b_{j+r}x^{j+r}$ becomes $\sum_{k=0}^{m-r}c_kx^k$, where $c_k = -1/a_0 \sum_{i=1}^{m-r}a_ib_{m-r-i}$. Then multiplying this gives us the desired formal Laurent series, right?2012-11-03
  • 2
    @user43552: Yes, though you don’t really need to go into all of the gruesome detail: it’s enough to know that the formal power series $h=g^{-1}$ exists, since clearly $fh\in R[[x]]$, and then the factor of $x^{-r}$ gets you your Laurent series.2012-11-03
  • 0
    Okay. I wrote out the details here just to make sure that I understood the argument properly. :) Thanks!2012-11-03
  • 0
    @user43552: What extra coefficients? The inverse $h(x)$ may well have infinitely many non-zero coefficients. (And you’re welcome!)2012-11-03
  • 0
    I'm a little confused as to why the inverse is guaranteed to have infinitely many non-zero coefficients. I can see why this would be the case for a formal power series with infinitely many non-zero coefficients, but not necessarily just a polynomial (as in the case of rational functions).2012-11-03
  • 0
    @user43552: I didn’t say that it’s **guaranteed** to have them, but it certainly **can** have them. Example over $\Bbb Q$: the inverse of the polynomial $1-x$ is $\sum_{k\ge 0}x^k$, with all coefficients equal to $1$.2012-11-03
  • 0
Oh. I was just trying to say that, if the inverse happened to have only a finite number of non-zero coefficients, then since a Laurent series needs infinitely many terms, I would just have to specify that the additional terms have a coefficient of zero. I think I had misunderstood your last comment, though.2012-11-03
  • 0
    @user43552: You probably don’t really need to make that explicit: it’s understood, just as it’s understood that a polynomial is a formal power series whose coefficients are $0$ from some point on.2012-11-03
  • 0
    Got it. - Thank you :)2012-11-03
  • 0
    A discussion in an [identical question](http://math.stackexchange.com/questions/1720517/how-to-prove-that-field-of-rational-functions-is-a-proper-subset-of-field-of-f) points out that this proof does not show why the inclusion is proper.2016-03-30
  • 0
    @Alex: I know. I was answering the specific question asked by the OP, not solving the problem that led to it.2016-03-30
  • 0
    It seems that this properness is addressed by Hagen von Eitzen in the comments under the original question.2016-03-30
  • 0
    @Alex: Yes, it is. And **Hagen** has now even added an answer that deals with that aspect. That’s fine, and doubtless useful for someone coming upon this page in the future. It has, however, no bearing on **my** answer, which, as I said, was for the specific question asked by the OP.2016-04-09
5

If $R$ is finite or countably infinite, the rational-function field $R(x)$ is countable, while the Laurent-series field $R((x))$ is uncountable, so the inclusion is certainly proper.

3

To show that $R(x)$ is a proper subset of $R((x))$, we first need to ignore the "is". To be more precise, I'd prefer to say that $R(x)$ is canonically isomorphic to a proper subring of $R((x))$.

First part: subset

We have a canonical and straightforward map from the ring $R[x]$ of polynomials to the ring $R((x))$ of formal Laurent series (this does not even require $R$ to be a field), and accordingly we identify polynomials with their corresponding power series. To extend this map to $R(x)$ we need to find, for every non-zero polynomial $f\in R[x]$, a series $u\in R((x))$ such that $f\cdot u=1$. First consider the case that $f$ has constant term $1$. Then we can define $u_i$, $i\in\Bbb N_0$, recursively so that for all $n$ $$\tag1 f(x)\cdot \sum_{i=0}^nu_ix^i\in 1+x^{n+1}R[x].$$ Indeed, we can just let $u_0=1$ and then recursively let $u_n$ be $-1$ times the coefficient of $x^n$ in the polynomial $f(x)\cdot\sum_{i=0}^{n-1}u_ix^i$. We obtain a power series $u(x)=\sum_{i\ge 0}u_ix^i$ with $f(x)u(x)=1$, as desired.
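For a concrete illustration of the recursion (my example, not part of the original answer), take $R=\Bbb Q$ and $f(x)=1-x-x^2$: then $u_0=1$; the coefficient of $x$ in $f\cdot u_0$ is $-1$, so $u_1=1$; the coefficient of $x^2$ in $f\cdot(1+x)=1-2x^2-x^3$ is $-2$, so $u_2=2$; and so on, giving $$\frac1{1-x-x^2}=1+x+2x^2+3x^3+5x^4+\cdots=\sum_{n\ge0}F_{n+1}x^n,$$ where $F_n$ denotes the $n$-th Fibonacci number.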

Now consider a general $f\ne 0$. It can be written as $a\cdot x^k\cdot \hat f$, where $a\in R\setminus \{0\}$, $k\in \Bbb N_0$, and $\hat f$ is a polynomial with constant term $1$. As just seen, there is a power series $\hat u$ with $\hat f\hat u=1$. Then $u:=a^{-1}x^{-k}\hat u$ is a Laurent series with $fu=1$, as desired. (This is the only place where we use that $R$ is a field: we need the inverse $a^{-1}$.)
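For instance (again my example, assuming we work over $\Bbb Q$ so that $2^{-1}$ exists), $f(x)=2x^2-2x^3=2\cdot x^2\cdot(1-x)$ has the Laurent-series inverse $$u(x)=\tfrac12\,x^{-2}(1+x+x^2+\cdots)=\tfrac12x^{-2}+\tfrac12x^{-1}+\tfrac12+\tfrac12x+\cdots,$$ and indeed $f(x)u(x)=(1-x)\sum_{k\ge0}x^k=1$.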

Remark: Actually, it suffices to know that $R((x))$ is itself a field, which can itself be proved by finding a multiplicative inverse recursively, (almost) precisely as above.

Second part: proper

It suffices to exhibit a single formal Laurent series that cannot be written as a quotient of polynomials. Consider $$ u(x)=\sum_{k=0}^\infty x^{k^2} $$ and assume that $u=\frac fg$ with $f,g\in R[x]$ and $g\ne 0$, say $g(x)=\sum_{j=0}^d a_jx^j$ with $a_d\ne 0$. Pick any $m\ge \max\{d,1\}$. Then in the product $u(x)g(x)=\sum_{k=0}^\infty x^{k^2}g(x)$ the coefficient of $x^{m^2+d}$ equals $a_d$, because $\deg(x^{k^2}g)<m^2+d$ for $k<m$ and $x^{m^2+d+1}\mid x^{k^2}g$ for $k>m$. Hence $ug$ has infinitely many nonzero coefficients and is not a polynomial, contradicting $ug=f\in R[x]$.
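For readers who like to double-check such coefficient arguments numerically, here is a small Python sketch of my own (the helper name and the sample $g$ are arbitrary choices, and the computation is over $\Bbb Z$):

```python
def check_gap_series(g, M=8):
    """For u(x) = sum_k x^{k^2} and a polynomial g = [a_0, ..., a_d] with
    a_d != 0, verify that the coefficient of x^(m^2 + d) in u*g equals a_d
    for every m with max(d, 1) <= m < M, as claimed in the answer."""
    d = len(g) - 1
    N = (M - 1) ** 2 + d + 1            # work with coefficients of degree < N
    u = [0] * N
    k = 0
    while k * k < N:
        u[k * k] = 1                    # truncation of u(x) = sum_k x^{k^2}
        k += 1
    prod = [0] * N
    for i in range(N):                  # multiply the two truncations
        for j, aj in enumerate(g):
            if u[i] and i + j < N:
                prod[i + j] += u[i] * aj
    for m in range(max(d, 1), M):
        assert prod[m * m + d] == g[d], (m, prod[m * m + d])
    return [prod[m * m + d] for m in range(max(d, 1), M)]

print(check_gap_series([3, 0, -1, 5]))  # g(x) = 3 - x^2 + 5x^3; prints [5, 5, 5, 5, 5]
```

Every returned value equals $a_d$, matching the claim that the coefficient of $x^{m^2+d}$ in $ug$ is $a_d$ for all $m\ge\max\{d,1\}$.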

  • 0
    I'm a bit unsure why the years-old answers were deemed to show not enough detail, but here goes another summary of the proof, this time including also the part about "proper".2016-04-07
  • 0
    I think it was precisely for the reason of the proper containment, but you can click through the linked question to see for yourself the comments by the user which prompted me. Regards2016-04-07
  • 0
    @HagenvonEitzen Thank you very much. Excellent, as usual. I already had the first part, and your second part is crystal clear. +12016-04-08
  • 0
Don't you want "because $\deg(x^{k^2}g)<m^2+d$ for $k<m$ and $x^{m^2+d+1} \mid x^{k^2}g$ for $k>m$" in the penultimate sentence?2017-12-19
0

Hint $\rm\displaystyle\quad 1\: =\: (a-xf)(b-xg)\ \Rightarrow\ ab=1$ $$\Rightarrow\ \ \displaystyle\rm\frac{1}{b-xf}\ =\ \frac{a}{1-axf}\ =\ a\:(1+axf+(axf)^2+(axf)^3+\:\cdots\:)$$
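As I read this hint (my paraphrase, not the author's wording): comparing constant terms in $1=(a-xf)(b-xg)$ gives $ab=1$, and conversely, once the constant term $b$ has an inverse $a$, the displayed geometric series inverts $b-xf$. A concrete instance over $\Bbb Q$, with $b=2$, $a=\tfrac12$ and $f=1$: $$\frac1{2-x}=\frac{1/2}{1-x/2}=\tfrac12\left(1+\tfrac x2+\tfrac{x^2}4+\cdots\right).$$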

  • 2
    Could you explain why this implication/equality follows? I'm afraid that I don't see it =(2012-11-03