$1.$ For a realistic model, I think you are right to use Poisson. The count could instead be modeled as binomial if there were $n = 40$ lines per page and a probability $p = 0.1$ of making a mistake on any one line. That would give an average of 4 mistakes per page, and $Pois(\lambda = 4)$ is not 'much' different from $Binom(40, 0.1).$ (The main difference is that the binomial model does not allow more than one mistake per line, so it caps the count at 40.)
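To see how close the two models are, you can compare the first several PMF values directly (a quick check of my own, not part of the computations below):

```r
# Compare Pois(lambda = 4) with Binom(n = 40, p = 0.1) on small counts
k = 0:10
round(dpois(k, 4), 4)         # Poisson probabilities
round(dbinom(k, 40, 0.1), 4)  # binomial probabilities
```

The two PMFs never differ by more than about $0.011$ at any count, which is what "not 'much' different" means in practice here.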
$2.$ Let $X$ be the number of pages that must be repeated out of 20. Under the Poisson model of (1), a page is repeated when it has more than 4 mistakes, so the probability of a repeat is $p = P(Y > 4) = 1 - P(Y \le 4) = 0.3712$ for $Y \sim Pois(4).$ Then
$X \sim Binom(20, p)$ with $E(X) = 20p = 7.4233.$
You want $P(X \le 2) = 0.0074.$
p = 1 - ppois(4, 4); p  # P(more than 4 mistakes on a page)
## 0.3711631
20*p                    # E(X)
## 7.423261
pbinom(2, 20, p)        # P(X <= 2)
## 0.007385354
$3.$ Here it seems you want $P(W > 100),$ where $W \sim Binom(1000, p),$ with the same $p$ as in (2). The exact binomial computation and (as you say) the normal approximation to the binomial both give essentially $1$ as the answer. The expected number
of bad pages in 1000 is about 371 with an SD of about 15.3, so 100 lies more than 17 standard deviations below the mean; it is no
surprise that the actual number of bad pages will exceed 100.
1 - pbinom(100, 1000, p)  # exact binomial
## 1
mu = 1000*p; sg = sqrt(1000*p*(1-p)); mu; sg
## 371.1631
## 15.27747
1 - pnorm(100, mu, sg)    # normal approximation
## 1
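As a sanity check on the "essentially $1$" answers, you can compute how many standard deviations below the mean 100 lies (this check is my addition):

```r
p = 1 - ppois(4, 4)                    # same p as in (2)
mu = 1000*p; sg = sqrt(1000*p*(1-p))
(mu - 100)/sg  # roughly 17.7, so P(W <= 100) is vanishingly small
```

A z-score that large puts $P(W \le 100)$ far beyond the resolution of double-precision arithmetic, which is why both methods print exactly `1`.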
This guy needs to improve his typesetting to make fewer errors. And so do I,
so please proofread everything here.
The figure below shows the distributions and approximations discussed above.
(Bars for the Poisson distribution in the right-hand plot are too close together
to show individually.)
