
I have been struggling with Computational Methods and am having a hard time finding good examples/resources online to work through.

Previously, we used MATLAB to calculate the errors of a given algorithm and obtained errors much larger than machine epsilon. Now we want to find which single operation in the algorithm causes the large error.

First Example:

$y = 3 - \sqrt{9-x^2}$ for $x = \frac{1}{3}\cdot 10^{-5}$
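Evaluating this naively in double precision already shows the problem. Here is a quick Python analogue of the MATLAB error check (my own sketch, using `decimal` as a high-precision stand-in for the exact value):

```python
from decimal import Decimal, getcontext
import math

getcontext().prec = 50            # high-precision reference arithmetic

x = 1.0 / 3.0 * 1e-5

# Naive double-precision evaluation of y = 3 - sqrt(9 - x^2)
y_naive = 3.0 - math.sqrt(9.0 - x * x)

# Reference value computed to 50 digits from the same double input x
xd = Decimal(x)                   # exact conversion of the double
y_ref = Decimal(3) - (Decimal(9) - xd * xd).sqrt()

rel_err = abs((Decimal(y_naive) - y_ref) / y_ref)
print(f"y_naive  = {y_naive:.17e}")
print(f"rel. err = {rel_err:.3e}  vs machine eps {math.ulp(1.0):.3e}")
```

The relative error comes out many orders of magnitude above machine epsilon, even though every single operation was performed with a relative rounding error below $2^{-52}$.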

I broke it up into:

$y_1 = x^2$

$y_2 = 9 - y_1$

$y_3 = \sqrt{y_2}$

$y_4 = 3 - y_3$

I am then using $c_f = \frac{f'(x)\,x}{f(x)}$ to determine the condition number.

Respectively, I got:

$c_{f1} = \frac{x(2x)}{x^2} = 2$

$c_{f2} = \frac{y_{1}(-1)}{9-y_1} = 1.234 * 10^{-12} $

$c_{f3} = \frac{y_2}{2y_2} = \frac{1}{2}$

$c_{f4} \approx 0$
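As a sanity check, these can be evaluated numerically (a Python sketch of the formulas above; note that the formula actually gives $c_{f2}$ a negative sign, I quoted its magnitude):

```python
import math

x = 1.0 / 3.0 * 1e-5

y1 = x * x              # y1 = x^2
y2 = 9.0 - y1           # y2 = 9 - y1
y3 = math.sqrt(y2)      # y3 = sqrt(y2)

# Condition number of each step with respect to that step's own input,
# c = g'(t) * t / g(t):
c_f1 = x * (2.0 * x) / (x * x)          # = 2
c_f2 = y1 * (-1.0) / (9.0 - y1)         # ~ -1.234e-12
c_f3 = y2 * (1.0 / (2.0 * y3)) / y3     # = 1/2
print(c_f1, c_f2, c_f3)
```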

Would I then conclude that the $x^2$ operation causes the largest error in the algorithm?

Could someone please explain whether this is correct or point me in the right direction? Thank you!

  • How do you get to the constant in $c_{f2}$? Basically: when subtracting numbers of equal magnitude, you lose a lot of significance. – 2017-02-13
  • @Laray Sorry, I just updated it to include the value of $x$ to be used in the calculation. – 2017-02-13
  • @Laray Could you please explain your comment more? Should I be expecting $9 - y_1$ to be the operation with the largest error? – 2017-02-13

2 Answers


Subtracting numbers that differ only by a few decimal places results in catastrophic cancellation; this is a loss of significance. Ways to avoid it are covered under numerical stability: using algebraic identities, Taylor polynomial approximation, polynomial simplification, etc.

By the way I made some calculations with Mathematica to test the conditioning of your function:

$c_{f_1} = 2$

$c_{f_2} = -\frac{2x^2}{9-x^2} = -2.469135802\cdot 10^{-12}$

$c_{f_3} = -\frac{x^2}{9-x^2} = -1.23456790\cdot 10^{-12}$

$c_{f_4} = \frac{x^2}{\sqrt{9-x^2}(3-\sqrt{9-x^2})} = 1.99999983$


EDIT: As you asked me to clarify where the conditioning results came from, this is the derivation:

$c_{f_1} = \frac{x\cdot y_1'}{y_1} = \frac{x\cdot 2x}{x^2} = 2$

$c_{f_2} = \frac{x\cdot y_2'}{y_2} = \frac{x\cdot -2x}{9-x^2} = -\frac{2x^2}{9-x^2}$

$c_{f_3} = \frac{x\cdot y_3'}{y_3} = \frac{x\cdot -\frac{x}{\sqrt{9-x^2}}}{\sqrt{9-x^2}} = -\frac{x^2}{9-x^2}$

and $c_{f_4} = \frac{x\cdot f'(x)}{f(x)}$

where $x=\frac{1}{3}\cdot10^{-5}$

Then the instability appears in $c_{f_2}$ with the subtraction of $9-x^2$. Hope it helps.
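These values can be re-checked numerically (my own Python sketch of the closed forms above, not from the answer; note that evaluating the $c_{f_4}$ expression in floating point itself runs into the subtraction $3-\sqrt{9-x^2}$, which is why it prints $1.99999\ldots$ instead of agreeing with its exact value $\approx 2$ to machine precision):

```python
import math

x = 1.0 / 3.0 * 1e-5

c1 = x * (2.0 * x) / (x * x)        # = 2
c2 = -2.0 * x * x / (9.0 - x * x)   # ~ -2.469e-12
c3 = -(x * x) / (9.0 - x * x)       # ~ -1.235e-12
c4 = (x * x) / (math.sqrt(9.0 - x * x) * (3.0 - math.sqrt(9.0 - x * x)))
print(c1, c2, c3, c4)               # c4 is close to, but not exactly, 2
```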

  • Thank you for your answer. Please bear with me as I try to wrap my mind around what you are doing... Where did you get $c_{f2}$ from? I see that your $c_{f3}$ is the same as what I got for $c_{f2}$, so I'm not sure where you got that calculation from. – 2017-02-13
  • Also, could you please explain what I should be looking for to determine which operation caused the large error? To my understanding, whichever operation has the largest $c_f$ causes the largest error, which would make it $c_{f1}$, the squaring operation. Could you please explain your answer "Since both $c_{f2}$ and $c_{f3}$ are using the value $x^2$"? – 2017-02-13
  • @Daven.Geno: I edited my answer to clarify my results. Hope it fits now. – 2017-02-13
  • Thank you for the edits. Sorry for the questions, but I am still not fully understanding how you got your answer. For $c_{f2}$, how did you simplify $\frac{-2x^2}{9-x^2}$ to $\frac{-2}{9-x^2}$? How did the $x^2$ in the numerator get cancelled out? Also, could you please explain how you determined that the instability appeared in $c_{f2}$? Besides your calculation of the condition numbers, I'm not sure how you progressed through the problem to the solution. Again, thank you. – 2017-02-13
  • Sorry, I had a typo; see it now :D – 2017-02-13
  • I am just noticing that $c_{f2} \approx 2\,c_{f3}$, so it looks like this could just be a typo? – 2017-02-13
  • Ah, I see, thank you. Could you please explain how to use these condition numbers to determine which operation causes the largest error? – 2017-02-13
  • @Daven.Geno: Actually, the largest condition number determines the loss of significance (ill-conditioning). What is happening here is that you are subtracting a very small quantity from $9$, which yields a loss of significance due to finite-precision arithmetic; in exact arithmetic, of course, the calculation would be perfect. You can play around with the Taylor polynomial approximation $\frac{x^2}{6} + \frac{x^4}{216} + \frac{x^6}{3888}$, and you will end up with different results compared to evaluating $f(x)$ directly. – 2017-02-13
  • Sorry, I thought I was grasping the concept, but it doesn't appear so. The largest condition number determines the loss of significance? How does it play a role in determining which operation caused the greatest error? I understand why the subtraction should cause a loss of significance, but I do not understand how to find/prove it mathematically. Since $c_{f1}$ is the largest condition number, and $y_1 = x^2$, wouldn't it be the squaring that is causing the errors? – 2017-02-13
  • Well, the thing is that the first condition number remains constant regardless. When I studied this subject, I was taught to evaluate the whole function, so my concern is that only $c_{f_4}$ makes sense to me. By the way, I find this to be a loss-of-significance problem, not an ill-conditioning one: ill-conditioning is when you modify the input slightly and the output varies a lot in magnitude; see, for example, Wilkinson's polynomial. – 2017-02-13
  • Yes, thanks! I just found a sample problem online (where they use $c_{f2}(f_1)$ rather than $c_{f2}(x)$) and came to the conclusion that $3 - y_3$ is the operation causing the loss (finally a little progress). Now I am trying to rewrite the expression so that this loss does not occur (which I'm not sure exactly how to go about doing). – 2017-02-13
  • One method is to use a Taylor polynomial to approximate the result; in one of the comments above you will find it up to 6th degree. Compare the output of the Taylor polynomial at $x$ with $f(x)$. The point of using Taylor is that you get rid of the operations that cause the loss, but there are other interesting methods as well. – 2017-02-13
  • Hmm, I am familiar with the Taylor polynomial approximation, but I can't see exactly how you are applying it. I recall the theorem as $f(x) = f(a) + f'(a)(x-a) + \dots + \frac{f^{(n)}(a)(x-a)^n}{n!}$. Could you please explain what input you used to get the above order-6 approximation? – 2017-02-13
  • Since your $x$ is almost $0$, I've evaluated the Taylor polynomial in a neighborhood of $a=0$. In Mathematica: Series[3 - Sqrt[9 - x^2], {x, 0, 6}] (if you don't want to lose time doing it by "hand"). This polynomial gives a good approximation of $f(x)$ when $x$ is close to $0$. – 2017-02-13
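To tie the last few comments together in code: the degree-6 Taylor polynomial from the Series[...] call avoids the cancelling subtraction, and so does another standard reformulation, multiplying by the conjugate, $3-\sqrt{9-x^2} = \frac{x^2}{3+\sqrt{9-x^2}}$ (the conjugate form is my addition, not from the thread). A Python sketch:

```python
import math

x = 1.0 / 3.0 * 1e-5

naive     = 3.0 - math.sqrt(9.0 - x * x)               # cancellation here
taylor    = x**2 / 6.0 + x**4 / 216.0 + x**6 / 3888.0  # series from the comment
conjugate = x * x / (3.0 + math.sqrt(9.0 - x * x))     # no close subtraction
print(naive, taylor, conjugate)
```

The last two agree to full double precision, while the naive version differs from them well beyond machine precision.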

By using floating-point arithmetic, you represent every value as $1.abc(\ldots)\cdot 2^{qwer}$ (I use letters as placeholders for single bits here).

If you subtract similar numbers (like $1.abcdef \cdot 2^q$ and $1.abcdeg\cdot 2^q$), you lose a lot of precision. You can assume that both inputs have a relative error of $2^{-6}$, which is the best you can get with that mantissa. But if you subtract these numbers, you get a result of $0.(f-g)\cdot 2^{q-5}$. This has a relative error of $2^{-1}$, which is horrifyingly large. Try to avoid subtracting numbers of similar size!
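The same effect is easy to reproduce with ordinary doubles (a small sketch; the inputs are chosen so that the subtraction itself is exact by Sterbenz's lemma, and all the damage comes from the inputs' representation error):

```python
a = 1.000001   # nearest double carries an absolute error of ~1e-16
b = 1.0        # exact
diff = a - b   # the subtraction is exact, but ~6 decimal digits of
               # significance are gone compared to a fresh double
rel_err = abs(diff - 1e-6) / 1e-6
print(diff, rel_err)
```

Here `rel_err` comes out around $10^{-10}$: the inputs' $\sim 10^{-16}$ relative errors, amplified by the cancellation, exactly what the $2^{-6}\to 2^{-1}$ toy calculation above predicts in miniature.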

In your example, I think there is not that much of a problem, but let me give you a suggestion, since you already work with MATLAB. There is INTLAB from my professor, which automatically calculates upper and lower bounds for the result of every single arithmetic expression. The cost is really high, but you could use some of the papers mentioned on that page.
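INTLAB itself is a MATLAB toolbox, but the underlying idea, carrying an enclosure through every operation, can be sketched in a few lines of Python (a hand-rolled illustration only, not INTLAB; it widens every result outward by one ulp via math.nextafter, which covers the at-most-half-ulp rounding of each double operation):

```python
import math

# Minimal interval sketch (illustrative only, NOT INTLAB): every result
# is widened outward by one ulp so the true value stays inside.

def widen(lo, hi):
    return math.nextafter(lo, -math.inf), math.nextafter(hi, math.inf)

def isub(a, b):                        # enclosure of [a] - [b]
    return widen(a[0] - b[1], a[1] - b[0])

def isqrt(a):                          # enclosure of sqrt([a]), a[0] >= 0
    return widen(math.sqrt(a[0]), math.sqrt(a[1]))

x = 1.0 / 3.0 * 1e-5
xx = widen(x * x, x * x)               # enclosure of x^2
y = isub((3.0, 3.0), isqrt(isub((9.0, 9.0), xx)))

# The enclosure is guaranteed to contain 3 - sqrt(9 - x^2); its relative
# width shows how many digits the cancellation destroyed.
print(y, (y[1] - y[0]) / y[0])
```

The relative width comes out far larger than machine epsilon, so the enclosure tells you immediately how few digits the naive algorithm can guarantee, without knowing the exact answer.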