
Now I think the LHS can be rewritten as $\sum _{n=1}^\infty \left( \dfrac {1} {2n-1}-\dfrac {1} {2\cdot 3^{n-1}}-\dfrac {1} {2^{n+1}}\right) =\dfrac {1} {2}\log 2$

I guess one way to do this may be to start from the RHS and try to arrive at an expression that matches the LHS. Using the log series expansion $\log(1+x)=\sum_{n=1}^\infty \frac{(-1)^{n+1}}{n}x^n$ with $x=\sqrt 2-1$, we get $\frac {1} {2}\log 2 = \log \sqrt {2} = \sum _{n=1}^\infty \dfrac {\left( -1\right) ^{n+1}} {n}\left( \sqrt {2}-1\right) ^{n}$

Now the Binomial Theorem seems to be called for here: $\sum _{n=1}^{\infty }\dfrac {\left( -1\right) ^{n+1}} {n}\left( \sqrt {2}-1\right) ^{n} = \sum _{n=1}^{\infty }\dfrac {\left( -1\right) ^{n+1}} {n}\sum _{k=0}^{n}\binom{n}{k} \left(\sqrt {2}\right)^{k}\left( -1\right) ^{n-k} $

I am unsure how to proceed from here, or even whether I am going down the right path. Any help would be much appreciated.
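For what it's worth, here is a quick numerical sanity check (numerics only, not a proof) that the rearranged series really does seem to head toward $\frac12\log 2$; it just adds the terms in the pattern "one positive term with an odd denominator, then two negative terms with the next even denominators":

```python
from math import log

# Sum 1 - 1/2 - 1/4 + 1/3 - 1/6 - 1/8 + 1/5 - ... term by term:
# one positive odd-denominator term, then two negative even-denominator terms.
odd, even = 1, 2
total = 0.0
for _ in range(100000):        # 100000 blocks of three terms each
    total += 1.0 / odd         # +1/1, +1/3, +1/5, ...
    total -= 1.0 / even        # -1/2, -1/6, -1/10, ...
    total -= 1.0 / (even + 2)  # -1/4, -1/8, -1/12, ...
    odd += 2
    even += 4

print(total)            # ~0.34657
print(0.5 * log(2))     # 0.34657359...
```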


4 Answers


Rewrite $1-\dfrac {1} {2} -\dfrac {1} {4}+\dfrac {1} {3}-\dfrac {1} {6}-\dfrac {1} {8}+\dfrac {1} {5}-\cdots$ as

$\left(1-\dfrac {1} {2}\right) -\dfrac {1} {4}+\left(\dfrac {1} {3}-\dfrac {1} {6}\right)-\dfrac {1} {8}+\left(\dfrac {1} {5} - \dfrac{1}{10}\right) - \dfrac{1}{12} + \cdots$

which simplifies to

$\frac{1}{2} - \frac{1}{4} + \frac{1}{6} - \frac{1}{8} + \frac{1}{10} - \frac{1}{12} + \cdots = \frac{1}{2}\left(\frac{1}{1} - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - \frac{1}{6} + \cdots\right) = \frac{1}{2} \log(2).$

Edit: More generally (without specific numbers), we write the left-hand side as $\displaystyle\sum_{k=1}^\infty \left(\frac{1}{2k-1} - \frac{1}{4k-2} - \frac{1}{4k}\right)$, which is equal to $\displaystyle\sum_{k=1}^\infty \left(\frac{1}{4k-2}- \frac{1}{4k}\right)$ (since $\dfrac{1}{2k-1} - \dfrac{1}{4k-2} = \dfrac{1}{2k-1} - \left(\dfrac{1}{2k-1}\right)\dfrac{1}{2} = \left(\dfrac{1}{2k-1}\right)\dfrac{1}{2}$).

This is equal to $\dfrac{1}{2}\displaystyle\sum_{k=1}^\infty \left(\frac{1}{2k-1}- \frac{1}{2k}\right) = \dfrac{1}{2}\displaystyle\sum_{k=1}^\infty \dfrac{(-1)^{k+1}}{k}$, which is $\dfrac{\log(2)}{2}$ by examination of the power series $\log(1+x)=\sum_{k=1}^\infty\frac{(-1)^{k+1}}{k}x^k$ at $x=1$.
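If it helps to see the reduction concretely, here is a small numerical check (just a sketch; the cutoff $N$ is an arbitrary choice) that the block-by-block identity above is exact: the two partial sums agree up to rounding for every $N$, and both drift toward $\log(2)/2$.

```python
from math import log

N = 10000
grouped = sum(1/(2*k - 1) - 1/(4*k - 2) - 1/(4*k) for k in range(1, N + 1))
halved = 0.5 * sum(1/(2*k - 1) - 1/(2*k) for k in range(1, N + 1))

print(grouped - halved)     # essentially 0 (floating-point noise only)
print(grouped, log(2) / 2)  # both ~0.3465
```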


The specific series of the question has been addressed by other answers. I am not including proofs, but I figure some additional information may be useful.

I recently learned from Marion Scheepers of some interesting results of which this is a particular case. The question of rearranging conditionally convergent series was carefully studied in the nineteenth and early twentieth centuries, and there seems to be a resurgence of interest in recent years. All I knew until a few months ago was Riemann's rearrangement theorem.

Here are some particular cases of what follows: We have (from the Leibniz test) that the following three series converge conditionally: $ \sum_{n=1}^\infty\frac{(-1)^{n-1}}n,\quad\sum_{n=1}^\infty\frac{(-1)^{n-1}}{\sqrt n},\quad\sum_{n=2}^\infty\frac{(-1)^n}{n\ln n}. $ However, the rearrangement $ 1-\frac12-\frac14+\frac13-\frac16-\frac18\dots $ converges to half the value of the original series; the rearrangement $ \frac1{2\ln2}-\frac1{3\ln3}-\frac1{5\ln5}+\frac1{4\ln4}-\frac1{7\ln7}-\frac1{9\ln9}\dots $ converges to exactly the same value as the original series; and the rearrangement $ 1-\frac1{\sqrt2}-\frac1{\sqrt4}+\frac1{\sqrt3}-\frac1{\sqrt6}-\frac1{\sqrt8}\dots $ diverges. One would imagine that either this is all very chaotic, or else there is some underlying theory worthy of study.

The following is Martin Ohm's theorem from 1839:

For $p$ and $q$ positive integers, rearrange the sequence $\left(\frac{(-1)^{n-1}} n\right)_{n\ge 1} $ by taking the first $p$ positive terms, then the first $q$ negative terms, then the next $p$ positive terms, then the next $q$ negative terms, and so on. The rearranged series converges to $ \ln(2) + \frac12 \ln\left( \frac pq \right). $
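Here is a quick numerical illustration of Ohm's theorem (a sketch only; the helper name and the number of blocks are my own choices). Note that $p=1$, $q=2$ gives $\ln 2+\frac12\ln\frac12=\frac12\ln 2$, which is exactly the rearrangement in the question, while $p=1$, $q=4$ gives a rearrangement converging to $0$:

```python
from math import log

def pq_rearranged_sum(p, q, blocks=100000):
    """Partial sum of the (p, q)-block rearrangement of sum_{n>=1} (-1)^(n-1)/n:
    p positive terms (odd denominators), then q negative terms (even ones), repeated."""
    pos, neg = 1, 2   # next unused odd / even denominator
    total = 0.0
    for _ in range(blocks):
        for _ in range(p):
            total += 1.0 / pos
            pos += 2
        for _ in range(q):
            total -= 1.0 / neg
            neg += 2
    return total

# Compare the rearranged partial sums with ln(2) + (1/2) ln(p/q);
# the two columns agree to several decimal places.
for p, q in [(1, 1), (1, 2), (2, 1), (1, 4)]:
    print(p, q, pq_rearranged_sum(p, q), log(2) + 0.5 * log(p / q))
```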

This was generalized by Oscar Schlömilch in 1873 as follows:

Let's say that a sequence $c_1,c_2,\dots$ of real numbers is signwise monotonic iff the sequence $|c_1|,|c_2|,\dots$ of absolute values is monotone. Write $a_1,a_2,\dots$ for the subsequence of positive terms, and $-b_1,-b_2,\dots$ for the subsequence of negative terms.

Let $(c_n)_{n\ge1}$ be signwise monotonic and suppose that $\sum_{n=1}^\infty c_n$ is conditionally convergent. For $p$ and $q$ positive integers, rearrange the sequence by taking the first $p$ positive terms, then the first $q$ negative terms, and so on. The rearranged series converges to $ \sum_{n=1}^\infty c_n + g\ln\left(\frac pq\right), $ where $g$ is the limit $\lim_{n\to\infty}n a_n$, if it exists.
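If I am reading the statement correctly, this already accounts for the three examples at the top of this answer (take $p=1$, $q=2$): for $\sum_{n\ge1}\frac{(-1)^{n-1}}n$ the positive terms are $a_j=\frac1{2j-1}$, so $g=\lim_j j\,a_j=\frac12$ and the rearranged series converges to $\ln 2+\frac12\ln\frac12=\frac12\ln 2$, which is the series of the question; for $\sum_{n\ge2}\frac{(-1)^n}{n\ln n}$ the positive terms are $a_j=\frac1{2j\ln(2j)}$, so $g=0$ and the rearrangement keeps the original value; and for $\sum_{n\ge1}\frac{(-1)^{n-1}}{\sqrt n}$ we have $j\,a_j=\frac j{\sqrt{2j-1}}\to\infty$, so the limit $g$ does not exist and the theorem gives no conclusion (indeed that rearrangement diverges).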

Soon after, Alfred Pringsheim obtained some general results. In 1883 he proved three theorems (one of which was found to be defective as a consequence of Marion's work). Here is a quick summary of some of Marion's results, which extend Pringsheim's work:

For signwise monotone sequences $c_1,c_2,\dots$, given a subset $A\subseteq{\mathbb N}$, let $f_A(n)=a_j$ if $n$ is the $j$th element of $A$, and $f_A(n)=-b_j$ if $n$ is the $j$th element of ${\mathbb N}\setminus A$.

Write $d(A)$ for $\displaystyle \lim_{n\to\infty}\frac {|\{k\in A\mid k\le n\}|}n$, if it exists.

Let $0<x<1$. Then the following are equivalent for a signwise monotonic sequence converging to 0:

  1. $\lim_{n\to\infty}n a_n=\infty$, and there is some $A\subseteq{\mathbb N}$ such that $d(A)=x$ and $\sum_{n=1}^\infty f_A(n)$ converges.

  2. $d(B)=x$ for each set $B$ such that $d(B)$ exists and $\sum f_B$ converges.

A curious "anti-rearrangement" result holds for this density notion:

If there is more than one $x$ such that $0<x<1$ and, for some $A$, $d(A)=x$ and $\sum_n f_A(n)$ converges, then for any $B,C$, if $d(B)$ exists, $d(B)=d(C)$, and $\sum_n f_B(n)$ converges, then $\sum_n f_C(n)$ also converges, and $ \sum_n f_B(n)=\sum_n f_C(n). $

In this case, call $\Phi(x)$ the value of $\sum_n f_B(n)$ for any $B$ with $d(B)=x$ (provided this sum converges for some such $B$; otherwise $\Phi(x)$ is undefined).

Then we have:

Suppose we have a signwise monotone sequence converging to 0. Let $t\in{\mathbb R}$ be a fixed real. The following are equivalent:

  1. $\lim_n n\cdot a_n=0$, and there is some $x$ with $0<x<1$ and there is some $A$ such that $d(A)=x$ and $\sum_n f_A(n)=t$.

  2. For each $B$, if $d(B)$ exists and $0<d(B)<1$, then $\sum_n f_B(n)=t$.

  3. $\Phi(x)=t$ for all $x\in(0,1)$.


I think your left-hand side should be interpreted as $\sum_{k=1}^{\infty} \left( \frac1{2k-1} - \frac1{4k-2} - \frac1{4k} \right).$ Then note that $a_k = \frac1{2k-1} - \frac1{4k-2} - \frac1{4k} = \frac1{2(2k-1)} - \frac1{4k} = \frac12 \left( \frac1{2k-1} - \frac1{2k} \right).$ Do you see now why it is $\frac12 \log(2)$?

  • +1 because it doesn't give too much away (making the reader figure out the exact connection to $\log(2)$, which is to their benefit) while giving a general strategy for analyzing seemingly unfamiliar sums in the future: break them into familiar ones. Nice answer. (2012-03-10)

Double Series. Let $u_{m,n}$ be a number determinate for all positive integral values of $m$ and $n$; consider the array

$\begin{matrix} u_{1,1} & u_{1,2} & u_{1,3} & \cdots \\ u_{2,1} & u_{2,2} & u_{2,3} & \cdots \\ u_{3,1} & u_{3,2} & u_{3,3} & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{matrix}$

Let the sum of the terms inside the rectangle formed by the first $m$ rows and the first $n$ columns of this array of terms be denoted by $S_{m,n}$.

If a number $S$ exists such that, given any arbitrary positive number $\epsilon$, it is possible to find integers $m$ and $n$ such that $\left| S_{\mu ,\nu} -S\right| < \epsilon $ whenever both $\mu > m$ and $\nu>n$, we say that the double series of which the general element is $u_{\mu ,\nu}$ converges to the sum $S$, and we write $\lim _{\mu\rightarrow \infty ,\nu\rightarrow \infty }S_{\mu,\nu}=S$. This definition is practically due to Cauchy.

If the double series, of which the general element is $\left|u_{\mu,\nu}\right|$, is convergent, we say that the given double series is absolutely convergent.

Since $u_{\mu,\nu} = \left(S_{\mu,\nu} - S_{\mu,\nu-1}\right)-\left(S_{\mu - 1,\nu} - S_{\mu -1,\nu -1}\right)$, it is easily seen that, if the double series is convergent, then $\lim _{\mu\rightarrow \infty ,\nu\rightarrow \infty }u_{\mu ,\nu}=0.$

Stolz's necessary and sufficient condition for convergence (but first proved by Pringsheim). A condition for convergence which is obviously necessary is that, given $\epsilon$, we can find $m$ and $n$ such that $\left|S_{\mu + \rho,\nu+\sigma} - S_{\mu,\nu}\right| < \epsilon$ whenever $\mu > m$ and $\nu>n$, where $\rho,\sigma$ may take any of the values $0,1,2,\ldots$

The condition is also sufficient; for, suppose it is satisfied; then, when $\mu>m+n$, $\left|S_{\mu + \rho,\mu+\rho} - S_{\mu,\mu}\right| < \epsilon$. Therefore $S_{\mu,\mu}$ has a limit $S$; then, making $\rho$ and $\sigma$ tend to infinity in such a way that $\mu + \rho = \nu + \sigma$, we see that $\left|S - S_{\mu,\nu}\right|\leq \epsilon$ whenever $\mu > m$ and $\nu >n$; that is to say, the double series converges to $S$.

An absolutely convergent double series is convergent. For if the double series converges absolutely, and if $t_{m,n}$ is the sum of the first $m$ rows and $n$ columns of the series of moduli, then, given $\epsilon$, we can find $\mu$ such that, when $p>m>\mu$ and $q>n>\mu$, $t_{p,q}-t_{m,n}<\epsilon$. But $\left| S_{p,q}-S_{m,n}\right| \leq t_{p,q}-t_{m,n}$, and so $\left| S_{p,q}-S_{m,n}\right| < \epsilon $ when $p>m>\mu$, $q>n>\mu$; and this is the condition that the double series should converge.

Methods of Summing Double Series. Let us suppose that $\sum _{\nu=1}^{\infty }u_{\mu,\nu}$ converges to the sum $S_{\mu}$. Then $\sum _{\mu=1}^{\infty }S_{\mu}$ is called the sum by rows of the double series; that is to say, the sum by rows is $\sum _{\mu =1}^{\infty }\left( \sum _{\nu=1}^{\infty }u_{\mu,\nu}\right) $. Similarly, the sum by columns is defined as $\sum _{\nu=1}^{\infty }\left( \sum _{\mu =1}^{\infty }u_{\mu,\nu}\right) $. That these two sums are not necessarily the same is shown by the example $S_{\mu ,\nu}=\dfrac {\mu - \nu } {\mu+\nu}$, in which the sum by rows is $-1$, the sum by columns is $+1$, and $S$ does not exist.
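As a tiny illustration of that example (a numerical sketch only, not part of the classical treatment): with $S_{\mu,\nu}=\frac{\mu-\nu}{\mu+\nu}$, each row limit is $-1$, each column limit is $+1$, and the diagonal sits at $0$, so no double limit $S$ can exist.

```python
def S(mu, nu):
    # Partial-sum array of the counterexample: S(mu, nu) = (mu - nu) / (mu + nu)
    return (mu - nu) / (mu + nu)

# S(m, nu) -> -1 as nu grows; this is the sum of the first m row sums,
# so the sum by rows is -1.
print([S(3, nu) for nu in (10, 1000, 100000)])
# S(mu, n) -> +1 as mu grows, so the sum by columns is +1.
print([S(mu, 3) for mu in (10, 1000, 100000)])
# Along the diagonal the value is always 0, so the double limit S does not exist.
print([S(n, n) for n in (10, 1000, 100000)])
```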

Pringsheim's Theorem: If $S$ exists and the sums by rows and by columns exist, then each of these sums is equal to $S$. For, since $S$ exists, we can find $m$ such that $\left| S_{\mu ,\nu} -S\right| < \epsilon $ whenever both $\mu > m$ and $\nu>m$. Therefore, since $\lim _{\nu\rightarrow \infty }S_{\mu ,\nu}$ exists, $\left| \left( \lim _{\nu\rightarrow \infty }S_{\mu ,\nu}\right) -S\right| \leq \epsilon$; that is to say, $\left| \sum _{p=1}^{\mu }S_{p}-S\right| \leq \epsilon $ when $\mu>m$, and so the sum by rows converges to $S$. In like manner the sum by columns converges to $S$.

  • It got a bit long, so instead of posting it as comments I think it is better suited as an answer. (2012-03-11)