
I am studying Runge Kutta methods using the videos here - http://mathforcollege.com/nm/videos/youtube/08ode/rungekutta2nd/rungekutta2nd_08ode_derivationone.html.

$y_{i+1} = y_i + (a_1k_1 + a_2k_2)h$

where

$k_1 = f(x_i, y_i)$

$k_2 = f(x_i + p_1h, y_i + q_{11}k_1h)$

and we have the following stipulations -

$a_1 + a_2 = 1$

$a_2p_1 = \frac{1}{2}$

$a_2q_{11} = \frac{1}{2}$

Does anyone know the reasons for these stipulations? I think I saw it mentioned somewhere that $a_1 + a_2 = 1$ means the method is 'consistent', but I don't know what that means.
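For concreteness, here is a small sketch I put together (parameter names match the derivation above; the test problem $y' = y$, $y(0) = 1$ is just my own choice) showing that two choices satisfying the stipulations, Heun's method and the midpoint method, both behave as second-order methods:

```python
import math

def rk2_step(f, x, y, h, a1, a2, p1, q11):
    # one step of the general two-stage method from the derivation above
    k1 = f(x, y)
    k2 = f(x + p1 * h, y + q11 * k1 * h)
    return y + (a1 * k1 + a2 * k2) * h

def integrate(f, x0, y0, x_end, h, params):
    x, y = x0, y0
    for _ in range(int(round((x_end - x0) / h))):
        y = rk2_step(f, x, y, h, *params)
        x += h
    return y

f = lambda x, y: y                       # test problem y' = y, exact solution e^x
heun     = (0.5, 0.5, 1.0, 1.0)          # a1 = a2 = 1/2, p1 = q11 = 1
midpoint = (0.0, 1.0, 0.5, 0.5)          # a1 = 0, a2 = 1, p1 = q11 = 1/2

for name, params in [("Heun", heun), ("midpoint", midpoint)]:
    e1 = abs(integrate(f, 0.0, 1.0, 0.10, params) - math.e)
    e2 = abs(integrate(f, 0.0, 1.0, 0.05, params) - math.e)
    print(name, "error ratio:", e1 / e2)  # roughly 4 in both cases
```

Halving $h$ roughly quarters the error for both parameter choices, which is the second-order behaviour the stipulations are apparently meant to guarantee.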

1 Answer


(Note: what you call $a_i$, I call $b_i$, to keep with standard single-step nomenclature.)

The condition $\sum_i b_i = 1$ (your $a_1 + a_2 = 1$) is sometimes known as the quadrature condition. Essentially, we can write down a table of constants (a Butcher tableau) for a general single-step ODE integrator in the following form:

$\begin{array}{c|cccc} c_1 & a_{11} & a_{12} & \cdots & a_{1n} \\ c_2 & a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ c_n & a_{n1} & a_{n2} & \cdots & a_{nn} \\ \hline & b_1 & b_2 & \cdots & b_n \\ \end{array}$

The matrix $A$ is the matrix of $a_{ij}$ elements, and the vector $\mathbf{b}$ is the column vector of $b_i$ elements.
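For reference, and assuming the usual single-step convention (the $c_i$ are the abscissae in the left column of the tableau), these constants define one step from $(x, y)$ to $x + h$ via

$k_i = f\left(x + c_ih,\ y + h\sum_{j=1}^{n} a_{ij}k_j\right), \qquad y_{\text{new}} = y + h\sum_{i=1}^{n} b_ik_i,$

so your $a_1, a_2$ are my $b_1, b_2$, your $p_1$ is my $c_2$, and your $q_{11}$ is my $a_{21}$.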

For the method to be consistent of order $p$ (and hence convergent, via a more involved theorem, since one-step methods are automatically $0$-stable), we must have

$\mathbf{b}^TA^{k-1}\mathbf{1} = \frac{1}{k!},\ k = 1,2,\ldots, p$

for a method of order $p$.

With these conditions in hand, we can estimate the leading term of the local truncation error as

$d_n \approx h^p\left(\mathbf{b}^TA^p\mathbf{1}-\frac{1}{(p+1)!}\right).$
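As a quick sanity check (my own, not from the book), here is a short script evaluating both displayed expressions for the classical RK4 tableau shown at the end of this answer:

```python
import numpy as np
from math import factorial

# classical RK4 tableau
A = np.array([[0, 0, 0, 0],
              [1/2, 0, 0, 0],
              [0, 1/2, 0, 0],
              [0, 0, 1, 0]])
b = np.array([1/6, 1/3, 1/3, 1/6])
e = np.ones(4)
p = 4

# order conditions b^T A^(k-1) 1 = 1/k! for k = 1, ..., p
for k in range(1, p + 1):
    print(k, b @ np.linalg.matrix_power(A, k - 1) @ e, 1 / factorial(k))

# bracketed factor in the truncation-error estimate above
print(b @ np.linalg.matrix_power(A, p) @ e - 1 / factorial(p + 1))
```

All four order conditions hold exactly, and since $A$ is strictly lower triangular (hence nilpotent, $A^4 = 0$), the bracketed factor evaluates to $-\frac{1}{120}$.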

(Much of this was refreshed from the fantastic book Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations by Ascher and Petzold.)

The proof is a bit more involved, certainly more than I care to put in an MSE answer, but the short version is that $\sum_i b_i = 1$ is exactly the $k = 1$ condition above: it is necessary for consistency, and hence for convergence.

It is also worth mentioning that in the table above, for an explicit method, only the entries of $A$ strictly below the diagonal can be non-zero (each stage may use only previously computed stages). So, for the standard RK4 method, the table looks like this:

$\begin{array}{c|cccc} 0 & 0 & 0 & 0 & 0 \\ \frac{1}{2} & \frac{1}{2} & 0 & 0 & 0\\ \frac{1}{2} & 0 & \frac{1}{2} & 0 & 0\\ 1 & 0 & 0 & 1 & 0\\ \hline & \frac{1}{6} & \frac{1}{3} & \frac{1}{3} & \frac{1}{6} \\ \end{array}$
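To see that this tableau really is the familiar RK4 method, here is a minimal sketch (scalar ODE, with a test right-hand side of my own choosing) comparing a step driven directly by $(A, \mathbf{b}, \mathbf{c})$ against the hand-coded $k_1, \ldots, k_4$ formulas:

```python
import numpy as np

A = np.array([[0, 0, 0, 0], [1/2, 0, 0, 0], [0, 1/2, 0, 0], [0, 0, 1, 0]])
b = np.array([1/6, 1/3, 1/3, 1/6])
c = np.array([0, 1/2, 1/2, 1])

def tableau_step(f, x, y, h):
    """One explicit RK step driven directly by the (A, b, c) tableau."""
    k = np.zeros(4)
    for i in range(4):
        # explicit: stage i only uses the already-computed stages j < i
        k[i] = f(x + c[i] * h, y + h * np.dot(A[i, :i], k[:i]))
    return y + h * np.dot(b, k)

def classic_rk4_step(f, x, y, h):
    """The familiar hand-coded RK4 formulas."""
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h * k1 / 2)
    k3 = f(x + h / 2, y + h * k2 / 2)
    k4 = f(x + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

f = lambda x, y: np.cos(x) * y           # an arbitrary test right-hand side
print(tableau_step(f, 0.0, 1.0, 0.1))
print(classic_rk4_step(f, 0.0, 1.0, 0.1))
```

The two printed values agree to machine precision, since the generic stage loop is just the tableau written out.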