I have a question that asks me to solve the equation $x+\arcsin(x)={\pi\over 2}$ using a calculator. (By repeatedly pressing cos?) Then it asks me to justify the accuracy of the answer. What does this mean? What am I supposed to do? Please help!
Accuracy of solution?
-
Do you know about contractions? I guess "accuracy" means an upper bound on the error of the approximation. – 2011-12-09
-
Thanks, @Ilya. How are they relevant? And by contractions do you mean the Banach fixed point theorem? I know how to show that there is a unique solution in $(1/2,1)$. But what does this have to do with accuracy? – 2011-12-09
-
Since you know that there is a unique solution in $[-1,1]$, you can write it as $$\arcsin x = \frac\pi2-x$$ and apply $\sin$ to both sides. Then you can apply the Banach fixed point theorem. Do you know the accuracy of the latter method? If not, I will tell you. – 2011-12-09
-
[As for the solution itself...](http://mathworld.wolfram.com/DottieNumber.html) – 2011-12-09
-
Thanks, @Ilya. I don't know the accuracy of applying the Banach fixed point theorem... – 2011-12-09
-
Two thoughts immediately come to mind: Newton's method and attractive fixed points. – 2011-12-09
-
@J.M.: In fact, that is probably what I am meant to do: keep pressing cos, like Dottie :) How does accuracy work in this case? – 2011-12-09
-
@MichaelHardy: Thanks, and sorry for being misleading; I am quite low-tech. We are just supposed to use the calculator on our computers, not write our own programs! Edited, my bad. – 2011-12-09
-
I've put it as an answer. – 2011-12-09
2 Answers
To me this seems to be an exercise on the Contraction Mapping Theorem. Rewrite the equation as $$ x = \pi/2-\arcsin x, $$ so the right-hand side is defined only on $[-1,1]$ and takes values in $[0,\pi]$. By a monotonicity argument, there is exactly one solution of the equation.

Now rewrite it as $$ \arcsin x = \frac\pi2-x $$ and apply $\sin$ to both sides: $$ x = \cos x. \quad(1) $$ Geometrically, this reflects the graphs of the two functions across the line $y=x$.

Now we can apply the Contraction Mapping Theorem to solve $(1)$ on $[-1,1]$. Using the Lipschitz continuity of $\cos$, you can show that $$ |\cos x' - \cos x''|\leq \alpha|x'-x''| $$ where $\alpha = \sin 1<1$. Put $x_0 =0$ and construct $x_{n+1} = \cos x_n$, which converges to the solution $x^* = \lim\limits_{n\to\infty}x_n$.

We only need to find bounds: $$ |x^*-x_n| \leq \sum\limits_{k=n}^\infty|x_{k+1}-x_k| \leq \sum\limits_{k=n}^\infty\alpha^k = \frac{\alpha^n}{1-\alpha}, $$ where we used $|x_{k+1}-x_k|\leq \alpha|x_k-x_{k-1}|\leq\dots\leq\alpha^k|x_1-x_0|=\alpha^k$. By the way, $\alpha\leq 0.85$.
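The iteration and its a priori bound can be sketched in Python (a minimal illustration of the argument above; the tolerance $10^{-10}$ and variable names are my own choices, not from the post):

```python
import math

# Fixed-point iteration x_{n+1} = cos(x_n) with x_0 = 0.
# Stop once the a priori bound alpha^n / (1 - alpha) drops below a tolerance.
alpha = math.sin(1.0)   # Lipschitz constant of cos on [-1, 1]
x = 0.0                 # x_0
n = 0
while alpha**n / (1 - alpha) > 1e-10:
    x = math.cos(x)
    n += 1

# x is now guaranteed to be within 1e-10 of the true fixed point
print(n, x)   # converges toward the Dottie number, about 0.739085
```

Note that the bound $\alpha^n/(1-\alpha)$ is quite pessimistic: near the fixed point the contraction factor is closer to $\sin(0.739)\approx 0.67$, so the iterates converge faster than the worst-case estimate suggests.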
-
Suppose one has a function $f: \mathbb{R} \rightarrow \mathbb{R}$ (*not* known to be a contraction mapping, but for which a solution of $x = f(x)$ *is* known to exist in an interval $I$), a starting value $x_0$, and a number $n$, such that $\forall k \ge n$ the *computed* iterates $f^k(x_0)$ are all the same value in $I$ to within some fixed precision (say fifteen digits). Even though $f$ is not known to be a contraction mapping, what can be said of such an apparent approximate solution? Is there a nice example in which such an apparent solution is incorrect? – 2011-12-09
-
Take $f(0)=1/2$, $f(1)=1$ and $f(x)=0$ for all $0<x<1$. – 2011-12-09
-
@dls: I should have specified the solution to be *unique*, the interval to be *closed*, and the function to be *continuous*. The idea is that proving it's a contraction mapping should be unnecessary in many well-behaved cases. – 2011-12-09
-
@r.e.s.: A function with a multiple root is a good example. Try $f(x)=x^2$. You can be within $10^{-16}$ of zero for $-10^{-8}<x<10^{-8}$, so the accuracy of the root is much worse than the machine epsilon. – 2011-12-10
If you wrote a program using a method you know, you can use the analysis of the method to give an answer. For example, if you use Newton-Raphson, the error is roughly squared at each step. It is then not too bad an approximation to say the error is about the square of the amount you moved in the step just before you converged.
If you still get a non-zero result for $x+\arcsin(x)-{\pi\over 2}$ at your converged $x$, you can use the derivative (essentially taking one more step of Newton-Raphson) to estimate the error. I get an $x$ of about $0.739$. If you are using single precision, you have about $24$ bits of precision, so you can't add anything less than $0.739\cdot 2^{-24}\approx 4.4\times 10^{-8}$ to it and see any difference. If your next step should be $10^{-10}$, that is a reasonable approximation to your error.
Neither of these is rigorous, in that they do not give an absolute upper bound, but they will be very close. Your function is nicely behaved near the root: there are no other roots nearby.
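The Newton-Raphson approach above can be sketched as follows (an illustrative implementation, assuming $g(x) = x + \arcsin x - \pi/2$ and a starting guess of $0.7$; neither choice is from the answer itself):

```python
import math

def g(x):
    """The function whose root we seek: x + arcsin(x) - pi/2."""
    return x + math.asin(x) - math.pi / 2

def dg(x):
    """Its derivative: 1 + 1/sqrt(1 - x^2)."""
    return 1 + 1 / math.sqrt(1 - x * x)

x = 0.7                       # starting guess inside (1/2, 1)
for _ in range(6):
    step = g(x) / dg(x)
    x -= step                 # error roughly squares each iteration

# |step| from the last iteration serves as a rough error estimate
print(x, abs(step))
```

The quadratic convergence means a handful of iterations already reaches machine precision, in contrast with the roughly 145 cosine presses the linear contraction bound requires for $10^{-10}$ accuracy.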
-
Thanks, Ross. Actually, to be precise, we are told to use the calculator on our computers (sorry, I forgot to say that), or equivalently we can use our calculators directly. I think the trick is to keep pressing cos. But how do I justify its accuracy? – 2011-12-09
-
Can you figure out what the least significant change in $x$ is? This is the $4.4\times 10^{-8}$ I referred to for single precision. You can't be much more accurate than that. One way is to take your converged value and compute $(x+\epsilon)-x$ and see where you get $0$. In this case you are probably accurate to that, assuming your trig functions are at least that good (not necessarily true). Then if you display the fact that $f(x + \epsilon) \ne 0$, you can claim that you are good to 1 LSB. – 2011-12-09
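The probe described in this comment can be sketched as follows (a sketch assuming double precision on a computer, not the single-precision calculator discussed above; the halving loop is my own way of locating the threshold):

```python
import math

# Find the least significant change near the converged value: halve eps
# until adding eps/2 to x no longer changes it in floating point.
x = 0.7390851332151607   # converged value from repeatedly pressing cos
eps = 1.0
while (x + eps / 2) - x != 0.0:
    eps /= 2

# eps is now the smallest probe that still registered a difference;
# for doubles and x in [1/2, 1) it is on the order of 1e-16
print(eps)
```

The analogous experiment in single precision would give the $\approx 4.4\times 10^{-8}$ figure from the answer, since floats carry only about 24 bits of mantissa.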