On each iteration of the bisection method the error is halved, so we gain one binary digit of precision per iteration. I want to find how many decimal digits of precision are gained. Does the following look right?
$$E_{k+1} = \frac{1}{2}E_k = \left(\frac{1}{10}\right)^{x}E_k$$
$$\frac{1}{2} = \frac{1}{10^x}$$
$$2= 10^x$$
$$x = \log_{10} 2$$
$$x \approx 0.30103$$
So about $0.30103$ decimal digits of precision are gained on each step. Is that correct?
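As a sanity check, the per-step gain can be verified numerically. A minimal Python sketch (the test function $f(x) = x^2 - 2$ and the interval $[1, 2]$ are just illustrative choices):

```python
import math

# Bisection on f(x) = x^2 - 2, which has the root sqrt(2) in [1, 2].
f = lambda x: x * x - 2
a, b = 1.0, 2.0

# Track -log10 of the bracket width (a proxy for decimal digits of
# accuracy) after each iteration.
digits = []
for _ in range(30):
    m = (a + b) / 2
    if f(a) * f(m) <= 0:
        b = m
    else:
        a = m
    digits.append(-math.log10(b - a))

# The bracket width halves every step, so the digit count should grow
# by log10(2) ~ 0.30103 per iteration.
gains = [digits[k + 1] - digits[k] for k in range(len(digits) - 1)]
print(sum(gains) / len(gains))
```

The average printed gain matches $\log_{10} 2 \approx 0.30103$, confirming the derivation: halving the error adds exactly one binary digit, which is $\log_{10} 2$ decimal digits.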