I would go with the book's answer, rather than yours. Here's why:
To rate the accuracy of a weather forecast, we can indeed choose between $P(rain|rainy\ forecast)$ and $P(rainy\ forecast|rain)$, but I would argue that the latter is better, because it is more stable over time and place.
That is: if a weather forecast predictor has been calibrated across different times and places, then $P(rainy\ forecast|rain)$ is probably fairly stable between those times and places: when the signs of rain are there, the forecasting algorithm picks up on them with a certain probability, and that probability is not going to change much from time to time or from place to place.
On the other hand, a measure like $P(rain|rainy\ forecast)$ is probably going to change quite a bit from time to time and, I would think, especially from place to place. That is: if I am living in a place where there is a lot more rain than average, then the higher 'base rate' probability (or 'prior') $P(rain)$ will increase $P(rain|rainy\ forecast)$ as well, meaning that this is not a stable measure that I can take from place to place.
For the same reason, the 'accuracy rating' of tests for diseases is typically defined as $P(P|D)$ (with $P$: test positive and $D$: has disease) rather than $P(D|P)$. As the base disease rate $P(D)$ goes up and down, $P(D|P)$ will go up and down with it: if $P(D)$ gets smaller, the proportion of false positives among the positives goes up, meaning that $P(D|P)$ goes down. But the ability of the test to pick up on the disease will presumably barely change over time and place, and thus $P(P|D)$ remains pretty much the same, and so is the more 'stable' measure that I can take with me through time and place.
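To make that concrete, here is a small numerical sketch (the sensitivity, specificity, and prevalence numbers are my own illustrative choices, not from the problem): with the test's $P(P|D)$ held fixed, the 'reverse' probability $P(D|P)$ swings dramatically with the base rate $P(D)$.

```python
def p_disease_given_positive(sensitivity, specificity, prevalence):
    """P(D|P) via Bayes' rule: P(P|D)P(D) / [P(P|D)P(D) + P(P|~D)P(~D)]."""
    false_positive_rate = 1 - specificity
    p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
    return sensitivity * prevalence / p_positive

# Same test (sensitivity 0.9, specificity 0.9), two populations:
common = p_disease_given_positive(0.9, 0.9, 0.10)   # prevalence 10%
rare   = p_disease_given_positive(0.9, 0.9, 0.001)  # prevalence 0.1%
print(round(common, 3), round(rare, 3))  # → 0.5 0.009
```

The test itself has not changed between the two populations, yet $P(D|P)$ drops from 50% to under 1% purely because $P(D)$ dropped.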
So, with this interpretation of 'the accuracy of a rainy forecast = 0.8', we get:
$P(rainy\ forecast|rain)=0.8$ (let's make that $P(RF|R) = 0.8$)
But you need to know:
$P(\neg R|RF)$
OK, by Bayes' rule:
$$P(\neg R|RF) = 1- P(R|RF) = 1 - \frac{P(RF|R)*P(R)}{P(RF)}$$
We already know $P(RF|R)=0.8$
We also know that $P(R) = 0.1$ (the 'prior' probability of rain, which in the last 5 years was 0.1)
And:
$P(RF) = P(RF|R)*P(R) + P(RF|\neg R)*P(\neg R)$
Here, $P(\neg R) = 1 - P(R) = 0.9$
And $P(RF|\neg R) = \,?$ Hmm, that's not clear, is it: we know that when it doesn't rain, the forecast is 90% accurate, but that doesn't tell me how often it falsely predicts rain. E.g. when it snows but the forecast doesn't predict snow, it might predict rain, but it might also predict that it's sunny... Huh: some information is missing. Either the book implicitly assumes that whenever the forecast fails on a non-rain event, it predicts rain (in which case $P(RF|\neg R) = 0.1$), or it assumes that the mis-predictions are distributed equally among all the other outcomes (in which case $P(RF|\neg R)=\frac{1}{30}$).
But yeah, there is a real problem here: we don't know the rate at which rain is incorrectly predicted!
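Just to show how much this missing number matters, here is the computation above carried out under both guesses for $P(RF|\neg R)$ (both values are the hypothetical fill-ins discussed above, not data given by the problem):

```python
def p_no_rain_given_forecast(p_rf_given_r, p_r, p_rf_given_not_r):
    """P(~R|RF) = 1 - P(R|RF), with P(R|RF) from Bayes' rule."""
    # Total probability of a rain forecast:
    p_rf = p_rf_given_r * p_r + p_rf_given_not_r * (1 - p_r)
    return 1 - (p_rf_given_r * p_r) / p_rf

# Guess 1: every failed non-rain prediction is a rain forecast, P(RF|~R) = 0.1
a1 = p_no_rain_given_forecast(0.8, 0.1, 0.1)
# Guess 2: the errors are spread equally over the other outcomes, P(RF|~R) = 1/30
a2 = p_no_rain_given_forecast(0.8, 0.1, 1 / 30)
print(round(a1, 3), round(a2, 3))  # → 0.529 0.273
```

The two guesses give answers of roughly 0.53 and 0.27 for $P(\neg R|RF)$, which is exactly why the unstated assumption matters so much.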