Let $s$ be a state in the Markov chain. An excursion from $s$ is a trajectory of the Markov chain that starts in $s$, follows the transition probabilities as usual, and ends upon its first return to $s$. Denote by $\tau_s$ the expected number of steps in an excursion from $s$ (i.e., the expected return time to $s$).
Then in general, for any state $x$, $\pi_x \cdot \tau_s$ is the expected number of visits to state $x$ in an excursion from $s$. In particular, the number of visits to $s$ itself in an excursion is always exactly $1$, so $\pi_s \cdot \tau_s = 1$.
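This identity can be checked numerically on a toy example. The sketch below uses a two-state chain with made-up transition probabilities $a$ and $b$ (not the chain from this problem): it simulates excursions from state $0$, estimates $\tau_0$ and the expected number of visits to state $1$ per excursion, and compares the latter with $\pi_1 \cdot \tau_0$.

```python
import random

def excursion_stats(a, b, n_excursions, seed=0):
    """Monte Carlo estimate of excursion statistics for a two-state chain.

    Hypothetical example chain (not the chain from the problem):
      P(0 -> 1) = a,  P(0 -> 0) = 1 - a,
      P(1 -> 0) = b,  P(1 -> 1) = 1 - b.
    Each excursion starts at state 0 and ends on the first return to 0.
    Returns (mean excursion length, mean number of visits to state 1).
    """
    rng = random.Random(seed)
    total_steps = 0
    total_visits = 0
    for _ in range(n_excursions):
        state = 0
        while True:
            # Take one step of the chain.
            if state == 0:
                state = 1 if rng.random() < a else 0
            else:
                state = 0 if rng.random() < b else 1
            total_steps += 1
            if state == 1:
                total_visits += 1
            else:
                break  # returned to state 0: excursion over
    return total_steps / n_excursions, total_visits / n_excursions

a, b = 0.3, 0.5
tau_0, visits_1 = excursion_stats(a, b, 200_000)
# Exact stationary distribution of the two-state chain:
pi_0, pi_1 = b / (a + b), a / (a + b)
# The identity predicts pi_0 * tau_0 ~ 1 and pi_1 * tau_0 ~ visits_1.
```

For this chain the exact values are $\tau_0 = (a+b)/b$ and expected visits to state $1$ equal to $a/b$, so the simulated averages should land close to those numbers.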
In this problem, take $s = (-1,-1)$; excursions from $s$ are then easy to understand. The expected number of visits to state $(0,0)$ in an excursion is $q$: we visit $(0,0)$ at most once, and we do so exactly when the chain does not return to $(-1,-1)$ immediately, which happens with probability $q$. Similarly, the expected number of visits to state $(i,j)$ equals the probability that the excursion ever reaches $(i,j)$, since no state can be revisited without first returning to $(-1,-1)$. That probability is $$q \cdot p^i \cdot \frac{W-j}{W}$$ because:
- The probability is $q \cdot p^i$ that we reach the $i^{\text{th}}$ "column" of the Markov chain before looping around;
- The probability is $\frac{W-j}{W}$ that, when we enter the $i^{\text{th}}$ column, we enter at one of the states $(i,j), (i,j+1), \dots, (i,W-1)$, from which state $(i,j)$ will eventually be reached.
As a result,
$$
\pi_{i,j} \cdot \tau_{-1,-1} = q \cdot p^i \cdot \frac{W-j}{W} \implies \pi_{i,j} = \frac{1}{\tau_{-1,-1}} \cdot q \cdot p^i \cdot \frac{W-j}{W}.
$$
Since $\frac{1}{\tau_{-1,-1}}$ is just $\pi_{-1,-1}$ (by the identity $\pi_s \cdot \tau_s = 1$ above), this gives the formula we wanted.
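As a quick sanity check (assuming, as the argument above suggests, that the columns are indexed by $i \ge 0$ and the rows by $0 \le j \le W-1$), the limiting probabilities must sum to $1$, which also yields a closed form for $\tau_{-1,-1}$:
$$
1 = \pi_{-1,-1} + \sum_{i \ge 0} \sum_{j=0}^{W-1} \pi_{i,j}
= \pi_{-1,-1} \left( 1 + q \sum_{i \ge 0} p^i \sum_{j=0}^{W-1} \frac{W-j}{W} \right)
= \pi_{-1,-1} \left( 1 + \frac{q}{1-p} \cdot \frac{W+1}{2} \right),
$$
using $\sum_{j=0}^{W-1} \frac{W-j}{W} = \frac{1}{W} \sum_{k=1}^{W} k = \frac{W+1}{2}$. Hence $\tau_{-1,-1} = \frac{1}{\pi_{-1,-1}} = 1 + \frac{q\,(W+1)}{2\,(1-p)}$.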
Note that there is a typo (both in the question and in the original paper): for state $(i,j)$ with $j \ne 0$, the limiting probability should still have a factor of $p^i$.