Background. Consider a game in which we put in one dollar and receive $X$ dollars back, where $X$ is 2 dollars with probability $p>1/2$, or zero dollars with probability $1-p$. We also assume that different plays of this game are independent. Note that $E[X] = 2p > 1$, meaning that the game is in our favor.
We begin with one dollar and play this game repeatedly, betting a fraction $f\in[0,1]$ of our current fortune on each play.
Let $X, X_1, X_2, \ldots$ denote independent random variables having the distribution described above, with $X_i$ the payout of the $i$-th play. Then our initial fortune is $F_0 = 1$, our fortune after one play is $F_1 = 1-f+fX_1 = 1+f(X_1-1)$, and in general our fortune after $n$ plays is $F_n = \prod_{i=1}^n(1+f(X_i-1)).$
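For concreteness, here is a minimal simulation sketch of this fortune process (the values $p=2/3$, $f=1/3$ and $n=100$ below are just numbers I picked for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_fortune(p, f, n):
    """Return F_n = prod_{i=1}^n (1 + f*(X_i - 1)), where X_i = 2
    with probability p and X_i = 0 with probability 1 - p."""
    X = rng.choice([2.0, 0.0], size=n, p=[p, 1 - p])
    return np.prod(1.0 + f * (X - 1.0))

# One sample path of the fortune after 100 plays.
print(simulate_fortune(p=2/3, f=1/3, n=100))
```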
Since the plays are independent, our expected fortune after $n$ plays is $E[F_n] = E\bigg[ \prod_{i=1}^n(1+f(X_i-1)) \bigg] = (1+f(E[X]-1))^n.$ Since $E[X] > 1$, it is clear that $E[F_n] \to \infty$ for every $f>0$, no matter how small.
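(Concretely, in our game $E[X]=2p$, so each factor has expectation $1+f(2p-1)$ and $E[F_n]=\big(1+f(2p-1)\big)^n$.)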
However, for any given $n$ we see that $E[F_n]$ is largest for $f=1$. Does this mean we should choose $f=1$? This is probably a bad idea, because in this case, the probability of going broke after $n$ or fewer plays is $1-p^n$, which approaches one as $n\to\infty$. In other words, if we choose $f=1$, we will eventually go broke for sure. (For a further explanation of this, feel free to refer to my previous question, Resolving a paradox concerning an expected value.)
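As a quick sanity check on the $1-p^n$ claim, here is a small simulation sketch ($p=2/3$ and $n=20$ are again just illustrative values):

```python
import numpy as np

rng = np.random.default_rng(1)

p, n, trials = 2/3, 20, 100_000

# With f = 1 we go broke the first time a play returns X = 0,
# so surviving n plays requires n consecutive wins (probability p^n).
wins = rng.random((trials, n)) < p
ruined = ~wins.all(axis=1)

print("simulated P(broke within n plays):", ruined.mean())
print("formula   1 - p^n:                ", 1 - p**n)
```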
So what strategy should we take to maximize our long-term performance? Some authors have suggested the approach of maximizing the “exponential rate of growth.” Let me explain.
Imagine that $F_n = e^{nG_n}$. Here, $G_n$ is the “exponential rate of growth” of $F_n$. We can write $G_n = \frac{1}{n}\log(F_n) = \frac{1}{n}\log\bigg( \prod_{i=1}^n(1+f(X_i-1)) \bigg) = \frac{1}{n} \sum_{i=1}^n \log(1+f(X_i-1)).$ Using the law of large numbers, we find that $G_n \to E[\log(1+f(X-1))] \quad \text{almost surely}.$ Now, $G := E[\log(1+f(X-1))]$ is what some authors refer to as the “exponential rate of growth” that we should aim to maximize. Indeed, $G$ seems to be the long-run “exponential rate of growth,” in some sense.
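Here is a quick numerical illustration of that convergence (again, $p=2/3$ and $f=1/3$ are just values I picked):

```python
import numpy as np

rng = np.random.default_rng(2)

p, f, n = 2/3, 1/3, 1_000_000

# G_n = (1/n) * sum_i log(1 + f*(X_i - 1)) should be close to
# G = p*log(1 + f) + (1 - p)*log(1 - f) by the law of large numbers.
X = rng.choice([2.0, 0.0], size=n, p=[p, 1 - p])
G_n = np.log(1.0 + f * (X - 1.0)).mean()
G = p * np.log(1 + f) + (1 - p) * np.log(1 - f)

print("G_n (empirical):", G_n)
print("G   (limit):    ", G)
```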
In our present case, we find that $G = p\log(1+f) + (1-p)\log(1-f),$ which, interestingly, is maximized at $f=2p-1$. According to this, we get the largest exponential rate of growth by choosing $f=2p-1$, and this is the fraction at which some authors suggest gamblers place their bets.
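(To see where $f=2p-1$ comes from: setting $\frac{dG}{df} = \frac{p}{1+f} - \frac{1-p}{1-f} = 0$ gives $p(1-f) = (1-p)(1+f)$, i.e. $f = 2p-1$.)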
Question 1. Why should I care about this? Can you give me some intuitive explanation of why my goal should be to maximize the exponential rate of growth, as opposed to maximizing $E[F_n]$ (i.e. choosing $f=1$), or choosing, say, $f=0.99$ to keep $E[F_n]$ high while avoiding the almost-sure bankruptcy? I don’t have a good feel for what I should do, and why, in order to get as much growth as possible in the long run.
Question 2. How can it be possible that the exponential rate of growth is maximized at $f=2p-1$, yet $E[F_n]$ is always highest for $f=1$? That doesn’t make much sense to me. Wouldn’t you expect $E[F_n]$ to eventually become largest at $f=2p-1$? In other words, how can we, in the long run, have the greatest growth at $f=2p-1$ but the greatest expected value at $f=1$?
Question 3. Here is a graph that shows $G$ as a function of $f$ for $p=2/3$.
The maximum is indeed at $2p-1=1/3$. However, something interesting happens around $f\approx 0.618$, where the curve goes below zero. Supposedly, this is the point beyond which we are almost sure to eventually end up with a loss, i.e. with less than the one dollar we started with. Can you explain why this is? Furthermore, it remains a mystery to me why the curve for $E[F_n]$ doesn’t show similarly interesting features, instead of being a monotonically increasing function of $f$.
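In fact, solving $G(f)=0$ for $p=2/3$ amounts to $(1+f)^2(1-f)=1$, which besides $f=0$ has the root $f=(\sqrt{5}-1)/2\approx 0.618$. Here is a small numerical check of that crossing (the bracket $[0.4, 0.9]$ passed to the root-finder is just a guess that happens to contain the root):

```python
import numpy as np
from scipy.optimize import brentq

p = 2/3

def G(f):
    """Exponential growth rate G(f) = p*log(1+f) + (1-p)*log(1-f)."""
    return p * np.log(1 + f) + (1 - p) * np.log(1 - f)

root = brentq(G, 0.4, 0.9)   # bracket chosen so that G changes sign inside it
print("G crosses zero at f =", root)             # approximately 0.618
print("(sqrt(5) - 1) / 2   =", (np.sqrt(5) - 1) / 2)
```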