
Using the markov chain $\{X_n\}$ on $\mathbb{N}_0$, with transition probabilities $p(x,x+1)=p(x,0)=1/2$ for all $x\in \mathbb{N}_0$, how can we compute (numerically) the sum of the series $$\sum_{n=0}^{\infty}\frac{1}{2^{n+\sqrt{n}}}~?$$

I don't seem to understand how a simulation of this process can approximate the desired sum. Any help will be appreciated.

Thank you in advance.

  • 1
    What is the invariant distribution of the given chain? – 2017-02-10
  • 1
    The invariant distribution is $\pi(n)=1/2^{n+1},~n\in \mathbb{N}_0.$ – 2017-02-10
  • 1
    Can you write your sum as an average of a function with respect to that invariant distribution? – 2017-02-10
  • 0
    We know that $\frac{1}{n}\sum_{k=1}^{n}f(X_k)\rightarrow \sum_{n=0}^{\infty}f(n)\pi(n)$ with probability $1$, so by taking $f(n)=2/2^{\sqrt{n}}$ we get almost sure convergence to the desired sum. – 2017-02-10
  • 1
    Yep; and now you can approximate such an average by simulating the process and averaging the values of $2^{1-\sqrt{n}}$ that you get along the way. This is called Markov Chain Monte Carlo. – 2017-02-10
  • 1
    If that is the intended answer, then it is quite a disappointment. I marked this problem as "interesting" because I thought the square root would somehow be related to that Markov chain (but I don't see how). For any ergodic Markov chain $X_k$ on state space $\mathcal{S}=\{0, 1, 2, …\}$ and with steady state $\{\pi_n\}_{n=0}^{\infty}$, we could define the random process $$f(X)= \frac{1}{\pi_{X}} \frac{1}{2^{X+\sqrt{X}}}$$ to get the same thing, but that seems silly. And there would be no reason or advantage to doing this simulation. – 2017-02-10
  • 0
    @Michael Nah, the point is that this chain is easy to simulate and the $2^{-n}$ part is the dominant scaling of the sum. On the other hand, the invariant distribution that we seek to draw from can be sampled from directly anyway, since the quantile function can be explicitly calculated. – 2017-02-10
  • 1
    @NikolaosSkout Since you figured out the solution with my guidance, could you answer your own question for us please? (Incidentally, I'm rather happy that this "Socratic method" actually worked this time. I often try it on more straightforward questions, but I often face answers of "I don't know" right off the bat.) – 2017-02-10
  • 0
    It has been very useful @Ian, thank you very much. I also realized that I need to read a bit more about the Monte Carlo method, which I had never used before. I'll post the answer as soon as possible; you are more than welcome to comment on anything that is missing. – 2017-02-10
  • 1
    @Ian : One reason the Socratic method typically fails on Stack Exchange is that another person jumps in with a full answer. – 2017-02-10
  • 0
    @NikolaosSkout : There _are_ cases where a value is difficult to obtain, but becomes easy to obtain from the behavior of a simple Markov chain (such as the "Jiang-Walrand theorem" for reversible chains, or coupling from the past to perfectly generate a random variable). Perhaps that is not the case here. However, if your class gives solutions and there is some more natural and clever way to solve this problem, could you update this post and/or send me a comment about it? – 2017-02-10

1 Answer


Following the hints by @Ian, we present an answer using the ergodic theorem and Monte Carlo simulation. First of all, we easily see that there is a unique invariant distribution $\pi:$ $$\pi=\pi P\iff \pi(k)=\frac{1}{2}~\pi(k-1),~k\in \mathbb{N} \Rightarrow~ \pi(k)=\frac{1}{2^k}\pi(0),~k\in \mathbb{N}$$ and $\displaystyle \sum_{k=0}^{\infty}\pi(k)=1\iff \pi(0)=1/2$, so $\pi(k)=1/2^{k+1},~k\in \mathbb{N}_0.$ Since the given Markov chain is irreducible and aperiodic and has an invariant distribution, the ergodic theorem gives, with probability one:
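As a sanity check, we can verify $\pi = \pi P$ numerically on a truncated state space. This is just an illustrative sketch: the truncation level `N = 50` is an assumption, chosen so that the neglected mass $2^{-N}$ is far below floating-point precision (the last state is simply sent back to $0$).

```python
import numpy as np

N = 50  # truncation level (assumption: 2**-N is negligible)

# Transition matrix of the chain on {0, 1, ..., N-1}:
# from x, go to x+1 with prob 1/2 or back to 0 with prob 1/2.
P = np.zeros((N, N))
for x in range(N - 1):
    P[x, x + 1] = 0.5
    P[x, 0] += 0.5
P[N - 1, 0] = 1.0  # fold the truncated "up" move into state 0

# Candidate invariant distribution pi(k) = 1/2^(k+1)
pi = np.array([2.0 ** -(k + 1) for k in range(N)])

print(np.allclose(pi @ P, pi))  # pi is indeed invariant
```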

$$\frac{1}{n}(f(X_1)+\ldots+f(X_n)) \rightarrow E^\pi(f)=\sum_{n=0}^{\infty}f(n)\pi(n).$$

By choosing $f(n)=2^{1-\sqrt{n}},~n\in \mathbb{N}_0$, we get with probability one:

$$\lim\frac{1}{n}(f(X_1)+\ldots+f(X_n)) =\sum_{n=0}^{\infty}\frac{1}{2^{n+\sqrt{n}}}.$$

For the numerical part, we can use the Monte Carlo simulation method: we take the mean of a large enough sample of $f(X_n),~n\geq 0,$ to approximate the above series.
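A minimal sketch of this simulation (the number of steps `n = 10**6` and the random seed are arbitrary choices): we run the chain from $X_0 = 0$, average $f(X_k) = 2^{1-\sqrt{X_k}}$ along the path, and compare against a direct partial sum of the series, which converges quickly since the terms decay like $2^{-n}$.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_chain(n_steps, rng):
    """Run the chain on N_0: from x, jump to x+1 or reset to 0, each w.p. 1/2."""
    x = 0
    states = np.empty(n_steps, dtype=np.int64)
    for i in range(n_steps):
        x = x + 1 if rng.random() < 0.5 else 0
        states[i] = x
    return states

n = 10**6
states = simulate_chain(n, rng)

# Ergodic average of f(X_k) with f(x) = 2^(1 - sqrt(x))
estimate = np.mean(2.0 ** (1.0 - np.sqrt(states)))

# Direct partial sum for comparison; 60 terms suffice since terms ~ 2^-k
direct = sum(2.0 ** -(k + np.sqrt(k)) for k in range(60))

print(estimate, direct)  # both close to 1.41
```

Note, as pointed out in the comments, that the chain is not strictly needed here: $\pi$ is a geometric distribution with an explicit quantile function, so one could also draw i.i.d. samples from $\pi$ directly and average $f$ over those.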