
I have a set of rather nasty, nonlinear difference equations roughly of the following form:

$ \frac{a^{(j)}_{i+1}(s)-a^{(j)}_i(s)}{\epsilon}=f^{(j)}\left(\lbrace a^{(k)}_i(s)\rbrace_{k=1}^m,\lbrace a^{(k)\prime}_i(s)\rbrace_{k=1}^m,\epsilon\right) $

I have one such equation for each $j$ from 1 to $m$. Here $a^{(j)}_i$ are functions of $s$ for each $i,j$, and primes denote derivatives with respect to $s$. The subscript $i$ is like a discrete time variable.
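For concreteness, here is a minimal numerical sketch of such an iteration (everything in it is a stand-in of my own: the grid, the initial data, and the toy $f^{(j)}$; the $s$-derivatives are approximated by central differences):

```python
import numpy as np

# Minimal sketch: each a^(j)_i is sampled on a uniform s-grid, and the
# primes a^(j)'_i are approximated by central differences (np.gradient).
# The right-hand side f below is a hypothetical stand-in nonlinearity,
# not the actual f^(j) from the problem.
m, eps, n_steps = 2, 1e-3, 100
s = np.linspace(0.0, 1.0, 101)

def f(j, a, a_prime, eps):
    # Hypothetical coupling between the components -- purely illustrative.
    return a[j] * a_prime[(j + 1) % m] - eps * a[j] ** 2

a = [np.sin(2 * np.pi * s), np.cos(2 * np.pi * s)]   # initial data a_0^(j)(s)
for i in range(n_steps):                             # discrete "time" i
    a_prime = [np.gradient(a_j, s) for a_j in a]
    a = [a[j] + eps * f(j, a, a_prime, eps) for j in range(m)]
```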

[I actually have a few problems that I can put in this form so I am curious what can be said at this level of generality.]

This set of equations propagates the initial data $\{a^{(j)}_0(s)\}_{j=1}^m$ as long as the $f^{(j)}$ remain well-defined (e.g. no division by zero or anything like that), and I can write down a set of inequalities on $\{a^{(j)}_i\}_{j=1}^m$, $\{a^{(j)\prime}_i\}_{j=1}^m$, and $\epsilon$ that guarantees this at any given step.

In the limit $\epsilon\rightarrow0$, I can expand the right-hand sides in a Taylor series in powers of $\epsilon$ and formally write:

$ \frac{\partial a^{(j)}(x,s)}{\partial x}=f^{(j)}_0\left(\lbrace a^{(k)}(x,s)\rbrace_{k=1}^m,\lbrace\partial_s a^{(k)}(x,s)\rbrace_{k=1}^m\right) $

where $f^{(j)}_0$ is the leading term of the Taylor series for $f^{(j)}$, and $a^{(j)}(x,s)$ is supposed to be a continuum limit of $a^{(j)}_i(s)$ as the spacing between successive $i$ goes to zero. The initial data for this is now a set of functions $\{a^{(j)}(0,s)\}_{j=1}^m$.
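To spell the identification out (my own bookkeeping, assuming a uniform grid): set $x_i = i\epsilon$ and $a^{(j)}_i(s) \approx a^{(j)}(x_i,s)$, so the left-hand side of the difference equation is a forward difference quotient in $x$:

$ \frac{a^{(j)}_{i+1}(s)-a^{(j)}_i(s)}{\epsilon}\approx\frac{a^{(j)}(x_i+\epsilon,s)-a^{(j)}(x_i,s)}{\epsilon}\;\xrightarrow{\epsilon\to0}\;\frac{\partial a^{(j)}}{\partial x}(x_i,s) $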

I am not an analyst at all so maybe this is either trivial or impossible, but

1) I would like to know whether, or under what circumstances, solutions of the difference equations converge to solutions of the differential equations, and how to show this convergence.

2) It turns out that when I take $\epsilon\rightarrow0$, the inequalities that guarantee a solution to the difference equations are always satisfied. Does this imply that the differential equations won't generate singularities as I evolve in $x$ as well?

I looked a bit in the literature but I mainly found articles going the opposite way, i.e. looking for good discretizations of PDEs, rather than showing that particular discretizations converge. Maybe I am missing something?

Feel free to recommend some books or articles or even "magic words" for google if this is all really basic.

  • Do there exist theorems which show that consistency + stability imply convergence for the type of problem I'm considering? Where might be a good place to look? (2012-09-13)

1 Answer


For "consistency + stability implies convergence", refer to the Lax–Richtmyer equivalence theorem.

Stability, intuitively speaking, means that the error made in computing the numerical solution $a_{i+1}$ at time step $t_{i+1}$ from the known data $a_i$, $f(a_i)$ does not accumulate through all $i$, so that you retain control of the error at the $n$-th step; i.e., your numerical scheme is not overly sensitive to perturbations. Stability depends on both the scheme and the equation itself. Mathematically speaking: if the scheme is $a_{i+1} = L a_i$, you need the powers of the operator $L$ to be bounded in some norm, uniformly over the relevant region of $(s,t)$. In your case there are $m$ components $a^{(j)}_{i+1}$, but the idea is the same.
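To illustrate with a toy example of my own (not from your question): for the model problem $\partial_x a = \partial_s a$ with an upwind difference in $s$, the update operator $L$ is circulant, and its powers stay bounded exactly when the step ratio $\epsilon/\Delta s$ (the CFL number) is at most 1:

```python
import numpy as np

# Stability of the model scheme a_{i+1} = L a_i for da/dx = da/ds,
# with an upwind difference in s (periodic grid) and an explicit step
# eps in x. L is a circulant matrix, so its powers are easy to examine.
N = 100

def update_matrix(lam):
    # lam = eps/ds is the CFL number; row k of L mixes a_k and a_{k+1}.
    return (1 - lam) * np.eye(N) + lam * np.roll(np.eye(N), 1, axis=1)

for lam in (0.5, 1.5):
    L = update_matrix(lam)
    growth = np.linalg.norm(np.linalg.matrix_power(L, 200), 2)
    print(f"CFL = {lam}: ||L^200||_2 = {growth:.3e}")
# CFL <= 1: powers of L stay bounded (stable). CFL > 1: they blow up,
# so per-step errors are amplified and convergence fails.
```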

The most famous paper is probably Dahlquist's on convergence and stability, but I suggest starting from an introductory book on finite difference methods; I recommend the one by Randall LeVeque (Finite Difference Methods for Ordinary and Partial Differential Equations), which also covers the Lax–Richtmyer equivalence theorem.

Since you mentioned the nonlinearity of your equation: based on your first equation, the right-hand side only involves the known data at time $t_i$, so even if your original problem is highly nonlinear, what you have is called an explicit scheme, and the difference equation is linear in the unknown $a_{i+1}$ (not in the $s$ variable, but in the $t$ variable whose derivative you are approximating by a finite difference). Note that explicit schemes are sometimes unstable for nonlinear equations due to $f$; I suggest using the implicit backward Euler scheme with Newton iteration for nonlinear equations (see the sketch below).
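Here is a minimal sketch of that suggestion for a scalar model problem $a' = f(a)$ (the cubic $f$ is a stand-in of mine, not your actual right-hand side): each step solves $a_{i+1} - a_i - \epsilon f(a_{i+1}) = 0$ by Newton's method.

```python
# Backward Euler with Newton iteration for the scalar model problem
# a' = f(a); the f used here is an arbitrary stand-in nonlinearity.
def f(a):  return -a ** 3
def df(a): return -3 * a ** 2        # f'(a), needed for the Newton step

def backward_euler_step(a_i, eps, tol=1e-12, max_iter=50):
    # Solve g(a) = a - a_i - eps * f(a) = 0 for a = a_{i+1}.
    a = a_i                          # initial guess: previous step
    for _ in range(max_iter):
        g = a - a_i - eps * f(a)
        if abs(g) < tol:
            break
        a -= g / (1.0 - eps * df(a))  # Newton update using g'(a)
    return a

a, eps = 1.0, 0.1
for i in range(10):
    a = backward_euler_step(a, eps)
print(a)   # decays monotonically, with no step-size restriction
```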

For consistency: your second equation shows the scheme is indeed consistent as you let $\epsilon \to 0$. The formal definition is that, under some norm, the true derivative $\partial_t a(t,s)$ at $t_i$ minus the finite-difference approximation $\dfrac{a_{i+1} - a_i}{\epsilon}$ goes to zero as $\epsilon \to 0$.
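A quick numerical check of this definition (test function of my choosing, purely illustrative): the local truncation error of the forward difference shrinks like $O(\epsilon)$.

```python
import numpy as np

# Consistency check for the forward difference: the local truncation
# error (a(t+eps) - a(t))/eps - a'(t) should vanish as eps -> 0.
# Test function: a(t) = sin(t), so a'(t) = cos(t).
t = 1.0
for eps in (1e-1, 1e-2, 1e-3, 1e-4):
    lte = (np.sin(t + eps) - np.sin(t)) / eps - np.cos(t)
    print(f"eps = {eps:.0e}: truncation error = {lte: .3e}")
# The errors shrink in proportion to eps: first-order consistency.
```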

  • Glad it helped. (2012-09-19)