
I would appreciate a definition clarification.

If a numerical method is "unstable", does that mean that if we introduce a small random error in one of the steps, the error will be greatly magnified over further steps? Is this true for all unstable algorithms, or are there some where the random error never becomes significant, say with respect to the error of the method itself?

  • More or less, yes. A tiny error committed in the course of the algorithm's progress easily gets magnified as the algorithm marches on. But one must always keep separate the notions of an "ill-conditioned problem" and an "unstable algorithm". (2011-12-31)

2 Answers


Unstable means that if you change your data points by a small value, then the result of your computation changes by a large value.

On the other hand, the errors that arise during the individual steps are rounding and precision errors.

Compare to Burden/Faires "Numerical Analysis" page 33:

"...One criterion we will impose on an algorithm whenever possible is that small changes in the initial data produce correspondingly small changes in the final results. An algorithm that satisfies this property is called stable; otherwise it is unstable...."

"...As a consequence, an algorithm that exhibits linear growth of error is stable, whereas an algorithm exhibiting exponential error growth is unstable...."
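To make the linear-vs-exponential distinction concrete, here is a small sketch of my own (the recurrence is a classic textbook-style example, not code from Burden/Faires): the sequence $p_n = (1/3)^n$ satisfies the recurrence $p_n = \frac{10}{3}p_{n-1} - p_{n-2}$, but the general solution of that recurrence also contains a $3^n$ mode, so rounding error in the starting values grows exponentially:

```python
def p_direct(n):
    """Stable: evaluate (1/3)^n directly; error stays on the order of machine precision."""
    return (1.0 / 3.0) ** n

def p_recurrence(n):
    """Unstable: the recurrence p_n = (10/3) p_{n-1} - p_{n-2} has (1/3)^n
    as a solution, but rounding error excites its 3^n mode, which grows
    exponentially and eventually swamps the true value."""
    a, b = 1.0, 1.0 / 3.0   # p_0, p_1
    for _ in range(n - 1):
        a, b = b, (10.0 / 3.0) * b - a
    return b if n >= 1 else a

for n in (5, 15, 25):
    # For small n both agree; by n = 25 the recurrence result is dominated
    # by the amplified rounding error.
    print(n, p_direct(n), p_recurrence(n))
```

Both algorithms compute the same quantity in exact arithmetic; only the unstable one lets the rounding error grow exponentially.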


Edit

What I wrote above is under the assumption that the problem you are trying to compute is well-conditioned. If it is not, then whether your algorithm is stable or not doesn't matter: even a tiny error in your input data will greatly perturb the output.
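A hypothetical illustration of an ill-conditioned *problem* (my own example, independent of any particular algorithm): intersecting two nearly parallel lines. A change of $10^{-4}$ in one input moves the solution by order 1, no matter how carefully the system is solved:

```python
def solve_2x2(a11, a12, a21, a22, b1, b2):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = a11 * a22 - a12 * a21
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

# Two nearly parallel lines: x + y = 2 and x + 1.0001*y = b2.
sol1 = solve_2x2(1.0, 1.0, 1.0, 1.0001, 2.0, 2.0)     # roughly (2, 0)
sol2 = solve_2x2(1.0, 1.0, 1.0, 1.0001, 2.0, 2.0001)  # b2 nudged by 1e-4
# The solution jumps from roughly (2, 0) to roughly (1, 1): the problem
# itself amplifies input perturbations by a factor of about 1e4.
```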

Hope this helps.


We say a method is stable when it is capable of controlling the errors introduced at each computation; stability is what allows the method to converge to a solution. Here is a simple example: \begin{equation} u_t = -u_x \end{equation}

Take a set of equally spaced nodes on the x-axis in 1D, and let $U_i$ denote the approximate value of the function $u(x)$ at the $i^{th}$ node. Using Forward Euler in time and centered differences in space gives \begin{equation} U_i^{n+1} = U_i^n - \frac{\Delta t}{2\Delta x} (U^n_{i+1} - U^n_{i-1}) \end{equation}

Now assume that at some iteration (maybe even in the initial condition) the numerical solution has an oscillatory form; for simplicity, take $U_i = (-1)^i x_i$. You can then check that no matter what values of $\Delta t, \Delta x$ you choose, the negative values of the solution keep decreasing and the positive values keep growing, which eventually leads to unacceptable results.
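You can check this numerically with a short sketch (my own, with grid sizes chosen arbitrarily for illustration): applying the forward-Euler / centered-difference update above to $u_t = -u_x$ on a periodic grid, even smooth initial data grows without bound, since this scheme is unconditionally unstable for the advection equation:

```python
import numpy as np

nx = 100
dx = 1.0 / nx
dt = 0.1 * dx                     # even a small time step does not help here
x = np.arange(nx) * dx
u = np.sin(20 * np.pi * x)        # smooth initial data with max |u| = 1

for step in range(2000):
    # U_i^{n+1} = U_i^n - dt/(2 dx) * (U_{i+1}^n - U_{i-1}^n), periodic in i
    u = u - dt / (2 * dx) * (np.roll(u, -1) - np.roll(u, 1))

print(np.max(np.abs(u)))          # far above 1: the oscillation has blown up
```

A von Neumann analysis shows why: each Fourier mode is multiplied per step by $1 - i\frac{\Delta t}{\Delta x}\sin(k\Delta x)$, whose modulus exceeds 1 for every mode with $\sin(k\Delta x) \neq 0$.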

This is only a simple explanation. For further reading I suggest Computational Fluid Dynamics, Vol. I, by Hoffman and Chiang. As you can see, this doesn't necessarily require random errors introduced intentionally; it can happen whenever there are any errors or oscillations in the solution.