I have been struggling with Computational Methods and am having a hard time finding good examples/resources online to follow along with.
Previously, we used MATLAB to compute the error of a given algorithm and obtained errors much larger than machine epsilon. Now we want to find which single operation in the algorithm causes the large error.
First Example:
$y = 3 - \sqrt{9 - x^2}$ for $x = \frac{1}{3} \times 10^{-5}$
I broke it up into:
$y_1 = x^2$
$y_2 = 9 - y_1$
$y_3 = \sqrt{y_2}$
$y_4 = 3 - y_3$
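To see what the steps actually produce in double precision, I also ran them in Python (a sketch equivalent to my MATLAB script; the rationalized expression `y_stable` at the end is just something I added myself for cross-checking, not part of the assignment):

```python
import math

# Evaluate the algorithm step by step in double precision
x = 1e-5 / 3          # x = (1/3) * 10^-5

y1 = x * x            # y1 = x^2
y2 = 9.0 - y1         # y2 = 9 - y1
y3 = math.sqrt(y2)    # y3 = sqrt(y2)
y4 = 3.0 - y3         # y4 = 3 - y3

# Algebraically equivalent form that avoids subtracting nearly
# equal numbers, used here only as a cross-check:
y_stable = x * x / (3.0 + math.sqrt(9.0 - x * x))

print(y4, y_stable)
```

Note that $y_3$ lands extremely close to $3$, so $y_4$ is tiny compared with the two operands of the final subtraction.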
I am then using $c_f = \frac{f'(x)\,x}{f(x)}$ to determine the condition number of each step.
For the four steps, respectively, I got:
$c_{f1} = \frac{x(2x)}{x^2} = 2$
$c_{f2} = \frac{y_1(-1)}{9 - y_1} \approx -1.234 \times 10^{-12}$
$c_{f3} = \frac{y_2 \cdot \frac{1}{2\sqrt{y_2}}}{\sqrt{y_2}} = \frac{y_2}{2y_2} = \frac{1}{2}$
$c_{f4} \approx 0$
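To double-check my hand calculations, I also evaluated the same four condition-number formulas numerically at the computed intermediate values (Python sketch; `cf1`–`cf4` are just my names for $c_{f1}$–$c_{f4}$ above):

```python
import math

x = 1e-5 / 3
y1 = x * x
y2 = 9.0 - y1
y3 = math.sqrt(y2)

# c_f = f'(x) * x / f(x), applied to each elementary step
cf1 = (2 * x) * x / (x * x)                            # f(x)  = x^2
cf2 = -y1 / (9.0 - y1)                                 # f(y1) = 9 - y1
cf3 = (1 / (2 * math.sqrt(y2))) * y2 / math.sqrt(y2)   # f(y2) = sqrt(y2)
cf4 = -y3 / (3.0 - y3)                                 # f(y3) = 3 - y3

print(cf1, cf2, cf3, cf4)
```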
Would I then conclude that the $x^2$ operation causes the largest error in the algorithm?
Could someone please explain whether this is correct or point me in the right direction? Thank you!