
Inspired somewhat by this problem, I've been investigating the behavior under iteration of the following discrete random process:

Given $n\in\mathbb{N}$, choose an integer from $\{0,1,\ldots,n\}$ uniformly at random, and multiply $n$ by it.

The process a.s. either reaches its fixed point, $0$, or diverges to infinity. (The only other possibility, that $n$ remains forever at some non-zero value, clearly has zero probability.) When it diverges to infinity, it does so quickly: its average value grows super-exponentially with the number of steps. I'm interested in the probability of the process reaching $0$, and in the cases where it does reach $0$, the largest value it attained.
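The process is easy to simulate directly. Here is a minimal sketch (the function name `simulate` and the divergence cutoff `cap` are my own choices; a trajectory that exceeds `cap` is treated as having diverged, which is an approximation justified by how quickly the process runs away):

```python
import random

def simulate(n, cap=10**30):
    """Run the process from n until it hits 0 or exceeds `cap`
    (treated as having diverged).  Returns the absorbing value:
    0 if the run died, or the first value above `cap`."""
    while 0 < n <= cap:
        # multiply n by a uniform integer from {0, 1, ..., n}
        n *= random.randint(0, n)
    return n

# Fraction of runs from n = 10 that reach 0: a rough estimate of p(10).
runs = 100_000
hits = sum(simulate(10) == 0 for _ in range(runs))
print(hits / runs)
```

With `n = 10` the estimate comes out a little above $\frac{1}{10}$, consistent with the lower bounds derived below.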

Let $p(n)$ be the probability of reaching $0$ starting from $n$. Then we have $ p(n) = \frac{1}{n+1}\left(1 + \sum_{k=1}^n p(kn)\right), $ or $ p(n) = \frac{1}{n} + \frac{1}{n}\sum_{k=2}^n p(kn). $

We have $p(n) \ge 1/n$ immediately; feeding this back in gives an improved lower bound of $ p(n) \ge \frac{1}{n}\left(1 + \sum_{k=2}^{n}\frac{1}{kn}\right) = \frac{1}{n}+\frac{1}{n^2}\left(H_{n}-1\right), $ where $H_n$ is the $n$-th harmonic number. The lower bound can be improved by repeating this process; can a useful asymptotic expansion for $p(n)$, or even a closed form, be derived?
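The repeated substitution can also be carried out numerically. A sketch (the name `p_lower` is mine), using the second form of the recurrence with the crude bound $p(m)\ge 1/m$ as the base case:

```python
def p_lower(n, depth):
    """Lower bound on p(n) obtained by `depth` rounds of substituting
    the recurrence p(n) = 1/n + (1/n) * sum_{k=2}^{n} p(kn),
    starting from the crude bound p(m) >= 1/m at depth 0."""
    if depth == 0:
        return 1 / n
    return 1 / n + sum(p_lower(k * n, depth - 1) for k in range(2, n + 1)) / n

# Successive lower bounds for p(10); each round of substitution
# can only increase the bound, and the increments shrink quickly.
for d in range(4):
    print(d, p_lower(10, d))
```

Depth 1 reproduces the bound $\frac{1}{n}+\frac{1}{n^2}(H_n-1)$ above, and the later rounds add corrections of rapidly decreasing size, which suggests the true $p(n)$ exceeds $\frac1n$ only by $O\!\left(\frac{\log n}{n^2}\right)$-type terms.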

Similarly, let $q(n)$ be the expected value immediately preceding $0$ among terminating sequences that start from $n$. It is clear from the definition that $q(n)\ge n$, and numerical simulation suggests that $q(n) \sim K n$ for some constant $K$. Can $q(n)$ be given an asymptotic expansion? What is the value of the constant $K$?
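The simulation behind this observation can be reproduced with a short Monte Carlo sketch (names and the divergence cutoff `cap` are mine; runs that exceed `cap` are discarded as divergent):

```python
import random

def last_before_zero(n, cap=10**30):
    """Run the process from n.  If it terminates at 0, return the
    value it held just before; return None once it exceeds `cap`
    (treated as having diverged)."""
    while True:
        m = random.randint(0, n)
        if m == 0:
            return n
        n *= m
        if n > cap:
            return None

n0 = 50
samples = []
for _ in range(100_000):
    v = last_before_zero(n0)
    if v is not None:
        samples.append(v)

# Sample estimate of q(n0)/n0.
print(sum(samples) / len(samples) / n0)
```

Since the process is non-decreasing while it stays positive, every sample is at least $n_0$, so the estimated ratio is automatically $\ge 1$; the interesting question is its limiting value.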

  • @RossMillikan: To be clear, the constant I'm wondering about is the limit of $q(n)/n$, where $q(n)$ is the expected value of the last number you reach before $0$, conditioned on actually reaching $0$. (2012-10-06)

1 Answer


I wrote the following comment (now corrected; it originally referred to $1-p(n)$):

I would guess that for large $n$ you almost never fail unless it is on the first try, so I would guess that $p(n)\sim \frac 1n$ and therefore $K=1$. This is because for large $n$ you almost surely multiply by a large number. But "almost" here is not (quite) "with probability $1$".

If we take this too seriously and try to iterate, we would say $p(kn)=\frac 1{kn}$. Putting that into your first equation we have $p(n)=\frac 1{n+1}\left(1+\sum_{k=1}^n \frac 1{kn}\right)=\frac 1{n+1}+\frac{H_n}{n(n+1)}\approx \frac 1{n+1}\left(1+\frac {\ln n}{n}\right)$, which looks to be converging rapidly for large $n$. This is still wild handwaving, but it might help.
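To see how quickly this heuristic approaches $\frac1n$, one can tabulate $n\,p(n)$ under it (the helper name `heuristic_p` is mine, and the formula is only the handwaved approximation above, not the true $p(n)$):

```python
def heuristic_p(n):
    """The handwaved approximation p(n) ~ 1/(n+1) + H_n / (n(n+1))."""
    h = sum(1 / k for k in range(1, n + 1))   # harmonic number H_n
    return 1 / (n + 1) + h / (n * (n + 1))

# n * heuristic_p(n) drifts toward 1, i.e. p(n) ~ 1/n up to a
# correction of order (log n)/n.
for n in (10, 100, 1000):
    print(n, n * heuristic_p(n))
```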

In this picture, the probability that we never get greater than $n$ is simply the chance that we hit some number of $1$'s (maybe none) and then a zero. This is $\sum_{i=1}^\infty \frac 1{(n+1)^i}=\frac 1n$. The chance that we get greater than $n$ and then hit zero is all the rest, which is $\frac 1{n+1}\left(1+\frac {H_n}n\right)-\frac 1n=\frac {H_n-1}{n(n+1)}\approx\frac{\ln n}{n^2}$. The chance that we hit $2n$ and no more is the chance that we hit some number of $1$'s, a $2$, some more $1$'s, and a zero. This is $\left(\sum_{i=0}^\infty\frac 1{(n+1)^i}\right)\frac 1{n+1}\left(\sum_{i=0}^\infty\frac 1{(2n+1)^i}\right)\frac 1{2n+1}=\frac 1n\cdot\frac 1{2n}=\frac 1{2n^2}$, already an order of $n$ smaller than the total chance $\sim\frac 1n$ of hitting zero at all, so I think $\lim_{n \to \infty} \frac {q(n)}n=1$.