Following Joriki's suggestion:
When looking at convergence rates, a common error estimate is $ e_{n+1} \leq K e_n^p$, with $p \geq 1$, $K >0$ and $e_n\geq 0$. It is simplest to split the approach into two cases, $p=1$ and $p >1$.
$p=1$ is referred to as linear convergence, and it is straightforward to see that $e_n \leq K^n e_0$. $K$ is referred to as the rate, and is only meaningful in this context if $K<1$. Taking the log of the bound gives $\log e_n \leq n \log K + \log e_0$, so assuming that $e_{n+1} \approx K e_n$, a plot of $\log e_n $ vs. $n$ should be roughly a straight line whose slope gives an estimate of $\log K$. (An estimate of $K$ can also be found by just computing $\frac{e_{n+1}}{e_n}$.)
Assuming that $e_n \approx |\text{dx}|$ above, plotting $\log e_n $ vs. $n$ gives the following for Method 1.

This is about as straight as one can hope for, with slope $-1.3392$, which gives $K \approx e^{-1.3392} \approx 0.26$. Hence Method 1 converges linearly with a rate of about a quarter.
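For what it's worth, both estimates of $K$ (the successive ratios $\frac{e_{n+1}}{e_n}$ and the slope of $\log e_n$ vs. $n$) take only a few lines in Python. This is just a sketch: the array `errs` is a stand-in for the recorded $|\text{dx}|$ values from Method 1, and the numbers below are made up.

```python
import numpy as np

# Hypothetical per-iteration errors e_n (stand-ins for the |dx| values).
errs = np.array([1e-1, 2.7e-2, 7.1e-3, 1.9e-3, 5.0e-4, 1.3e-4])

# Estimate 1: the successive ratios e_{n+1}/e_n should hover near K.
ratios = errs[1:] / errs[:-1]
print("ratio estimates of K:", ratios)

# Estimate 2: fit a line to log(e_n) vs. n; the slope estimates log(K).
n = np.arange(len(errs))
slope, intercept = np.polyfit(n, np.log(errs), 1)
print("K from slope:", np.exp(slope))
```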
Plotting $\log e_n $ vs. $n$ for Method 2 gives:

so this doesn't fit a linear (affine) model nicely. In fact, it converges considerably faster. This leads us to the second case:
$p>1$ is referred to as superlinear convergence. In this case, the rate $K$ is of much less consequence (assuming $e_n \to 0$, of course), and $p$ is referred to as the order. Superlinear convergence is dramatically faster than linear convergence, and is very desirable.
To get a handle on superlinear convergence, we multiply the error estimate by $K^{\frac{1}{p-1}}$ to get $K^{\frac{1}{p-1}} e_{n+1} \leq K^{\frac{1}{p-1}} K e_n^p = (K^{\frac{1}{p-1}} e_n)^p$. Iterating this gives $K^{\frac{1}{p-1}} e_n \leq (K^{\frac{1}{p-1}} e_0)^{p^n}$. This illustrates how quickly the error is reduced and, of course, is only useful if $K^{\frac{1}{p-1}} e_0 < 1$.
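As a quick sanity check, here is a minimal Python sketch that iterates the worst case $e_{n+1} = K e_n^p$ with made-up values of $K$, $p$ and $e_0$ and compares the scaled error against the bound above:

```python
K, p, e0 = 0.5, 2.0, 0.1          # made-up constants with K^(1/(p-1)) * e0 < 1
C = K ** (1.0 / (p - 1.0))        # the scaling factor K^{1/(p-1)}

e = e0
for n in range(6):
    bound = (C * e0) ** (p ** n)  # (K^{1/(p-1)} e_0)^{p^n}
    # With e_{n+1} = K e_n^p exactly, the bound is attained (up to rounding),
    # so the two columns should essentially agree.
    print(f"n={n}:  C*e_n = {C * e:.3e}  <=  bound = {bound:.3e}")
    e = K * e ** p
```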
One way of estimating $p$ is to again assume $e_{n+1} \approx K e_n^p$ and take logs, which gives $\log e_{n+1} \approx \log K + p \log e_n$. If we plot $\log e_{n+1}$ vs. $\log e_n$, we should be able to estimate $p$ from the slope. So, plotting $\log e_{n+1}$ vs. $\log e_n$ for Method 2 gives:

which is pretty straight. A quick slope estimate using the end points gives $ p \approx 1.993 \approx 2$. Hence this is likely to be quadratically convergent.
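Again as a sketch, the slope estimate of $p$ can be computed in a couple of lines, with `errs` standing in for the recorded errors of Method 2 (the values below are invented):

```python
import numpy as np

# Hypothetical errors from a superlinearly convergent method.
errs = np.array([1e-1, 4e-3, 8e-6, 3.5e-11, 6e-22])

x, y = np.log(errs[:-1]), np.log(errs[1:])

# End-point slope, as used above.
p_endpoints = (y[-1] - y[0]) / (x[-1] - x[0])

# Least-squares slope over all points.
p_fit = np.polyfit(x, y, 1)[0]

print("p from endpoints:", p_endpoints, "  p from fit:", p_fit)
```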
Note that from a numerical standpoint, the exponent of the error (as in the exponent and mantissa of the decimal representation) is a quick proxy for $\log e_n$. Hence for linear convergence, we expect the exponent to decrease linearly (well, affinely) with the iteration count, and for quadratic convergence we expect it to roughly double with each iteration. A quick glance at the errors above will confirm that this is indeed the case.
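To illustrate the doubling exponent with a generic example (this is not Method 2 above), Newton's method for $\sqrt{2}$ is quadratically convergent:

```python
from math import sqrt

# Newton's iteration for sqrt(2): x_{n+1} = (x_n + 2/x_n) / 2.
x, exact = 1.5, sqrt(2.0)
for n in range(4):
    err = abs(x - exact)
    print(f"n = {n}:  error = {err:.3e}")  # the exponent roughly doubles each step
    x = 0.5 * (x + 2.0 / x)
```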