I was wondering what the justification is for this step (changing the indexes): $\displaystyle\sum_{n=0}^{\infty}\frac{a^{n}}{n!}\sum_{m=0}^{\infty}\frac{b^{m}}{m!}=\sum_{k=0}^{\infty}\frac{1}{k!}\sum_{n=0}^{k}\frac{k!}{n!(k-n)!}a^{n}b^{k-n}$, in the prologue of Rudin's Real and Complex Analysis, used to show $(\exp{a})(\exp{b})=\exp{(a+b)}$. Is it the same principle as Fubini's theorem for integrals? I mean the one that says that if the domain of integration is D=AxB=ExF, then $\int_{D}=\int_{A}\int_{B}=\int_{E}\int_{F}$. I would appreciate any hint or reference. Thanks in advance.
Justification for changing indexes in a double sum
-
Yes, you are right. It can be seen as Fubini's theorem for series. – 2011-09-28
-
@RagibZaman but I have trouble seeing whether the "domain" of the indexes on the RHS is the same as the domain on the LHS of the equation. The LHS is like the whole plane $\mathbb{R}^{2}$, while the RHS is like the region below the identity function. Am I right? – 2011-09-28
1 Answer
If we let $f_n = \frac{a^n}{n!}$ and $g_m = \frac{b^m}{m!}$, you're asking why $$\displaystyle\sum_{n=0}^{\infty} f_n \sum_{m=0}^{\infty} g_m = \sum_{k=0}^{\infty}\frac{1}{k!}\sum_{n=0}^{k} \frac{k!}{n!(k-n)!} a^n b^{k-n} = \sum_{k=0}^{\infty}\sum_{n=0}^{k}f_n g_{k-n}$$
The reason is that the expression $$\sum_{n=0}^{\infty} f_n \sum_{m=0}^{\infty} g_m $$ is adding up the following numbers row-by-row,
$$ \begin{matrix} f_0 g_0 & f_0 g_1 & f_0 g_2 & f_0 g_3 & \cdots \\ f_1 g_0 & f_1 g_1 & f_1 g_2 & f_1 g_3 & \cdots \\ f_2 g_0 & f_2 g_1 & f_2 g_2 & f_2 g_3 & \cdots \\ f_3 g_0 & f_3 g_1 & f_3 g_2 & f_3 g_3 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{matrix}, $$ while the sum $$\sum_{k=0}^{\infty}\sum_{n=0}^{k}f_n g_{k-n}$$ is adding up the same numbers diagonal-by-diagonal.
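The row-by-row and diagonal-by-diagonal orders of summation can also be compared numerically. Here is a minimal Python sketch (the truncation order `N` and sample values of `a`, `b` are my own choices, not from the original post); both truncated sums should agree with each other and with $\exp(a+b)$ up to truncation error:

```python
import math

a, b = 1.3, -0.7
N = 30  # truncation order; the terms decay factorially, so this is plenty

# Row-by-row: (sum over n of f_n) times (sum over m of g_m),
# i.e. summing the infinite matrix one row at a time.
row_by_row = (
    sum(a**n / math.factorial(n) for n in range(N))
    * sum(b**m / math.factorial(m) for m in range(N))
)

# Diagonal-by-diagonal: for each k, add up the k-th antidiagonal
# f_n * g_{k-n} for n = 0, ..., k (the Cauchy product).
diagonal = sum(
    sum(
        a**n / math.factorial(n) * b**(k - n) / math.factorial(k - n)
        for n in range(k + 1)
    )
    for k in range(N)
)

print(row_by_row, diagonal, math.exp(a + b))
```

The two partial sums cover slightly different index sets (a square versus a triangle), but since both series converge absolutely, both tend to the same limit.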
Added: The answer above intentionally ignores questions of convergence. It assumes that the OP's question must be about the algebra of why the equality is true, since if there were problems with convergence, the equality wouldn't be asserted in Rudin. However, for those concerned about convergence issues here, Mertens' theorem can be applied, as wnoise points out below.
-
And there are no issues with convergence... – 2011-09-28
-
@wnoise : Why is there no issue with convergence? What justifies that? – 2011-09-28
-
@Arjang I think wnoise was being sarcastic. – 2011-09-28
-
@RagibZaman : LOL, I was staring at it trying to remember the double series convergence criterion! – 2011-09-28
-
I was apparently too elliptical. I meant that this answer is true in the sense that this is the algebra behind why we do this, but like any rearrangement of infinite sums, it requires that there are no issues with convergence. In this particular case, we have the product of two absolutely convergent series, and Mertens' theorem gives convergence of the infinite sum over k of the finite sum over n. I think we can actually show absolute convergence by showing that the magnitude of the terms goes to zero quickly enough as $n+m=k$ increases. – 2011-09-28
-
Actually, we can show absolute convergence by evaluating at $a \rightarrow |a|, b \rightarrow |b|$, which gives nonnegative terms of the same magnitude, and that series must also converge, again by Mertens' theorem. – 2011-09-28
-
...and the sum of the entries in each diagonal is a sum with only finitely many terms. – 2011-09-28
-
@wnoise: The OP's question seemed to be about why the rearrangement works algebraically, so I only chose to deal with that part in my answer. At any rate, thanks for adding why the convergence works, too. – 2011-09-28