
This is part of Exercise 8 on page 216 of *Analysis I* by Amann and Escher.

Show, by example, that if $\rho_a,\rho_b>0$, it is possible that $\rho_{ab}>\max\{\rho_a,\rho_b\}$, where $\rho_c$ denotes the radius of convergence of a power series $c:=\sum c_k X^k$.

In other words: I must find an example where, if I define $a:=\sum a_k X^k$, $b:=\sum b_k X^k$, and $c:=\sum c_k X^k$ with

$$\left(\sum a_k X^k\right)\left(\sum b_k X^k\right)=\sum c_k X^k$$

and $c_n=\sum_{k=0}^n a_k b_{n-k}$, then

$$\rho_c>\max\{\rho_a,\rho_b\}$$

I tried a lot of combinations but found nothing. In particular, if we define $a_k=\alpha^k$ and $b_k=\beta^k$ for any $\alpha,\beta\in\Bbb R$, we have

$$\rho_c=\min\{\rho_a,\rho_b\}$$

Do you know an elementary example for this exercise?
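As a numerical sanity check of the geometric-coefficient attempt above (a sketch of my own, using only the Cauchy product definition and the Cauchy–Hadamard formula $\rho = 1/\limsup |c_n|^{1/n}$, with the $\limsup$ estimated by the last computed term):

```python
# Numerical check (not part of the exercise) of the attempt a_k = alpha**k,
# b_k = beta**k: the radius of the Cauchy product comes out near the MIN of
# the two radii, not the max.

def cauchy_product(a, b):
    """Cauchy product coefficients c_n = sum_{k=0}^n a_k * b_{n-k}."""
    return [sum(a[k] * b[n - k] for k in range(n + 1)) for n in range(len(a))]

alpha, beta = 2.0, 3.0                      # rho_a = 1/2, rho_b = 1/3
N = 60
a = [alpha ** k for k in range(N)]
b = [beta ** k for k in range(N)]
c = cauchy_product(a, b)

# Cauchy-Hadamard estimate: rho ~ 1 / |c_N|^(1/N)
rho_estimate = 1 / abs(c[N - 1]) ** (1 / (N - 1))
print(rho_estimate)  # close to 1/3 = min{rho_a, rho_b}
```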

  • Do you know a little complex analysis? What behaviour of the function determines the radius of convergence of its power series? (2017-01-31)
  • @DanielFischer Oops, I was searching for examples with real coefficients... maybe this is harder or impossible. Let me think of something with complex coefficients. (2017-01-31)
  • *Hint.* You can exploit the identity $\sqrt{1+x} \cdot \frac{1}{\sqrt{1+x}} = 1$. (2017-01-31)
  • Whether the coefficients are real or complex doesn't matter. The point is that if you look at the function with complex arguments, there is a clear connection between the behaviour of the function and the radius of convergence. That connection makes it easy to construct examples. If you look only at real arguments, the reason for the radius of convergence being what it is, is not obvious at all. So if you know a little complex analysis, what determines the radius of convergence? (2017-01-31)
  • Complementing Daniel Fischer's comment, the radius of convergence is the maximal radius of the disk on which the function can be analytically continued. It is the point where you begin to see obstructions that prevent further continuation, a.k.a. singularities. In view of this fact, the problem is asking you to find an example where the multiplication of two functions cancels out those singularities. I gave one example, where the branch-cut singularities are cancelled out. (2017-01-31)
  • Well, I appreciate all your help, but I must say that this exercise comes from a book with little complex analysis, branch cuts, and so on. Indeed, this exercise comes before the theory of derivatives, Taylor expansions, or the concept of an analytic function. So I assume there must exist an example that is not so hard. (2017-01-31)
  • Poles are simpler than branch points, and it's easy to see that a pole is compensated for by a zero of appropriate order of the other factor, so you could look at something like $f(x) = \frac{1+x}{1-x}$ and $g(x) = \frac{1-x}{1+x}$. But coming up with such examples would still require more insight into the radius of convergence than it seems can be expected at this point. (2017-01-31)
  • I suspect you misinterpreted the exercise, and the goal is to find series such that the radius of convergence of the series $\sum a_kb_kX^k$ is larger than the radius of convergence of $\sum a_k X^k$ and that of $\sum b_k X^k$. Such examples are easy to find if you know some ways to determine the radius of convergence from the coefficients (like the ratio formula or the Cauchy-Hadamard formula). That also would fit well with $\rho_{ab} > \max \{ \rho_a,\rho_b\}$, because for the Cauchy product you only have $\rho_c \geqslant \min \{\rho_a,\rho_b\}$, not $\geqslant \max$. (2017-01-31)
  • @DanielFischer It is not a misunderstanding; the context, with the theory, is very clear that $\rho_{ab}$ means the radius of convergence of the (Cauchy) product of the power series $a$ and $b$. (2017-02-01)

1 Answer


Let me demystify the excellent example given by Daniel Fischer, bringing it down to the level of pure calculus. Let $(a_n)$ and $(b_n)$ be defined by

$$ a_n = \begin{cases} 1, & n = 0 \\ 2, & n \geq 1 \end{cases} \qquad \text{and} \qquad b_n = (-1)^n a_n. $$

It is easy to check that $\rho_a = \rho_b = 1$. On the other hand, $c_0 = a_0 b_0 = 1$ and for $n \geq 1$ we have

\begin{align*}
c_n &= \sum_{k=0}^{n} a_k b_{n-k} \\
&= \color{red}{(-1)^n \cdot 2} + \color{blue}{(-1)^{n-1} \cdot 4} + \cdots + \color{red}{(-1) \cdot 4} + \color{blue}{2} \\
&= (-1)^n \cdot 2 - 2 + 4 \sum_{k=0}^{n-1} (-1)^k \\
&= (-1)^n \cdot 2 - 2 + 4 \cdot \frac{1 - (-1)^n}{1 - (-1)} \\
&= 0.
\end{align*}

Therefore $\rho_c = \infty$ and we have $\rho_c > \max\{\rho_a, \rho_b\}$.
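The computation above can be confirmed with exact integer arithmetic; a small sketch of my own (not part of the original answer):

```python
# Exact integer check of the example: a_n = 1, 2, 2, 2, ... and
# b_n = (-1)**n * a_n give c_0 = 1 and c_n = 0 for all n >= 1, so the
# Cauchy product is the polynomial 1 and its radius of convergence is infinite.

N = 50
a = [1] + [2] * (N - 1)                      # a_0 = 1, a_n = 2 for n >= 1
b = [(-1) ** n * a[n] for n in range(N)]     # b_n = (-1)^n * a_n
c = [sum(a[k] * b[n - k] for k in range(n + 1)) for n in range(N)]

print(c[:5], all(x == 0 for x in c[1:]))  # [1, 0, 0, 0, 0] True
```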

  • The radius of convergence of a polynomial is defined as infinity? It makes sense, of course, and we can derive it from the Cauchy-Hadamard formula for the radius of convergence, right? Or is it just a convention? (2017-02-01)