
Let $f:[0,1]\to[0,1]$ be a smooth, convex (downward) function satisfying $$ f(0)=f(1)=1,\quad \lim_{x\to 0}f'(x)=-\infty,\quad \lim_{x\to 1}f'(x)=+\infty. $$

I am confident that I can argue that $f$ has exactly two fixed points in $[0,1]$ (one of them being $1$, of course).

I would like to show that for any starting value $x\in (0,1)$, the sequence of function iterates $f(x), f(f(x)),\ldots$ converges to the fixed point which is not $1$.

I know from the convexity of $f$ that there exist $0<x_-<x_+<1$ such that $f'(x_\pm)=\pm1$, and that $f$ is non-expansive on the interval $(x_-,x_+)$.

My plan was to argue that for any starting value the iterates $f^i(x)$ eventually lie in $(x_-,x_+)$, and to then apply Banach's fixed-point theorem.

My questions are:

  • Is it clear that the fixed point lies in the interval $(x_-,x_+)$? (I doubt it)
  • In order to apply Banach's fixed-point theorem, would I have to show that $f((x_-,x_+))\subset (x_-,x_+)$?
  • Is there a different approach that would guarantee convergence of the function iterates without checking additional conditions?

Thank you.
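For concreteness, here is a small numerical sketch (not part of any proof) that iterates the example $f(x)=2-\sqrt{x}-\sqrt{1-x}$ suggested in the comments below; for this particular $f$ the iterates appear to converge to the same fixed point from every tested starting value:

```python
import math

# Example function from the comments: convex on [0,1], f(0) = f(1) = 1,
# with f'(x) -> -inf as x -> 0 and f'(x) -> +inf as x -> 1.
def f(x):
    return 2 - math.sqrt(x) - math.sqrt(1 - x)

def iterate(f, x, n=200):
    """Apply f to x n times and return the final value."""
    for _ in range(n):
        x = f(x)
    return x

# The iterates from several starting values all settle near the same point p.
for x0 in (0.01, 0.1, 0.5, 0.9, 0.99):
    p = iterate(f, x0)
    print(f"x0 = {x0:5.2f}  ->  {p:.10f}  (residual f(p) - p = {f(p) - p:+.2e})")
```

Of course, one example converging proves nothing in general; it only shows the question is not vacuous.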


Edit:

Thanks to the efforts of richard and froggie it now seems that convergence of the iterates cannot be guaranteed under the conditions specified above.

I would therefore like to add the following assumptions: ($p$ denotes the fixed point which is not $1$)

  • $-1<f'(p)$.
  • If $c=\min_x f(x)$, then $-1\le f'(c)$.

I think that with these additional assumptions it should be possible to prove convergence of the function iterates from every starting point.

  • Do functions with these properties exist? (2012-12-16)
  • Yes, maybe the easiest example would be $f(x)=2-\sqrt{x}-\sqrt{1-x}$. (2012-12-16)
  • Oops, I misread your question. Thanks for the example. (2012-12-16)
  • Do you assume that $f$ is differentiable on $(0,1)$? (2012-12-16)
  • Yes. In fact, I'm happy to assume the existence of as many derivatives as necessary. I added smoothness to the original question. (2012-12-16)
  • I think convexity alone is sufficient to deduce that $f$ has exactly two fixed points, but then your notation $f'(x)$ should be replaced by something else, say the left/right derivative. (2012-12-16)
  • I agree. It might also be the case that differentiability is not required for convergence of the fixed-point iteration, but I do not see how to settle this question either way. (2012-12-16)

1 Answer

Since $f(1)=1$ and $\lim_{x\to 1}f'(x)=+\infty$, it is easy to see that there exists $a\in(0,1)$ such that $f(a)<a$. Define $g(x)=f(x)-x$ on $[0,1]$. Since $g(0)=1>0$ and $g(a)<0$, there exists $p\in(0,a)$ such that $g(p)=0$, i.e. $f(p)=p$. Since $g(p)=g(1)=0$ and $g$ is convex on $[p,1]$, either $g\equiv 0$ on $[p,1]$ or $g(x)<0$ on $(p,1)$. The former case cannot happen, because $\lim_{x\to 1}g'(x)=+\infty$. Therefore, $f$ has a unique fixed point $p$ in $[0,1)$.
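The intermediate-value argument can be mirrored numerically. Here is a minimal sketch (using the example $f(x)=2-\sqrt{x}-\sqrt{1-x}$ from the comments) that locates $p$ by bisection on $g(x)=f(x)-x$:

```python
import math

def f(x):
    return 2 - math.sqrt(x) - math.sqrt(1 - x)

def g(x):
    # g(x) = f(x) - x is positive near 0 and negative at a point a with f(a) < a.
    return f(x) - x

# g(0.01) > 0 while g(0.9) < 0, so bisection brackets the fixed point p.
lo, hi = 0.01, 0.9
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if g(mid) > 0:
        lo = mid
    else:
        hi = mid
p = 0.5 * (lo + hi)
print(f"p = {p:.10f}, f(p) - p = {f(p) - p:+.2e}")
```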

Unfortunately, it could happen that $f'(p)<-1$. In this situation, the iteration of $f$ cannot converge to $p$.

When $-1<f'(p)$, note that the iteration of $f$ on $(p-\delta,p+\delta)$ converges to $p$ for some $\delta>0$. Then we can define $I=(l,r)$ to be the maximal interval containing $p$ such that the iteration of $f$ on $I$ converges to $p$. By definition, $f(I)\subset I$. Since $I$ is maximal, $f(l),f(r)\notin I$, i.e. $f(l),f(r)\in\{l,r\}$. Then there are two cases: either $l=0$ and $r=1$, or $f(l)=r$ and $f(r)=l$. In the latter case, by the maximality of $I$, we can conclude that $f'(r)<0$, and hence $f'(p)<0$. Moreover, since $f(I)\subset I$, we know that $f'(l)f'(r)\ge 1$.
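As a numerical illustration of the local behaviour near $p$ (again using the example $f(x)=2-\sqrt{x}-\sqrt{1-x}$ from the comments, for which $-1<f'(p)<1$ happens to hold), the error $|f^n(x)-p|$ shrinks by roughly a factor of $|f'(p)|$ per step:

```python
import math

def f(x):
    return 2 - math.sqrt(x) - math.sqrt(1 - x)

def fprime(x):
    # Exact derivative of the example function.
    return -0.5 / math.sqrt(x) + 0.5 / math.sqrt(1 - x)

# Locate p by iterating from an arbitrary starting value.
p = 0.5
for _ in range(200):
    p = f(p)

# Successive error ratios |x_{n+1} - p| / |x_n - p| approach |f'(p)|.
x = 0.4
for n in range(6):
    x_next = f(x)
    ratio = abs(x_next - p) / abs(x - p)
    print(f"step {n}: error ratio = {ratio:.4f}  (|f'(p)| = {abs(fprime(p)):.4f})")
    x = x_next
```

This is exactly the linear convergence one expects from $|f'(p)|<1$; it says nothing about starting values far from $p$, which is the hard part of the question.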

  • Thank you for your answer, Richard. Would the condition $-1<f'(p)$ be sufficient for the iteration of $f$ to converge to $p$ for any starting value? In the notation of my original question this would mean $x_-<p$. (2012-12-16)
  • @Eckhard: I have updated my answer for $-1<f'(p)$. I avoid using your notation $x_\pm$, because the point $x$ with $f'(x)=\pm 1$ may not be unique. (2012-12-16)
  • Thanks again. I don't quite see yet why the maximal interval $I$ must be open, but otherwise the argument seems compelling. (2012-12-16)
  • @Eckhard: It is because if $f^n(x)\to p$, then there exists a neighborhood of $x$ that converges to $p$ under the iteration of $f$. (2012-12-16)
  • @richard: Do you have an example where $f'(p)<-1$? I don't think this can happen, since some open set of points (those near $1$) should be attracted to $p$. (2012-12-16)
  • @froggie: You may first construct some convex $f_1:[0,\frac{1}{2}]\to[\frac{1}{3},1]$, smooth on $(0,\frac{1}{2}]$, such that $f_1(0)=1$, $f_1(\frac{1}{2})=\frac{1}{3}$ and $f_1'<-1$. Then glue it with another $f_2$ defined on $[\frac{1}{2},1]$ to get $f$. (2012-12-16)
  • @Eckhard: Sorry, there was a mistake in my former argument. Now I think that if $f'(p)<0$, the convergence of the iteration of $f$ depends on the starting value. (2012-12-16)
  • @richard: Your answer is still very useful. I updated my question to include an additional assumption on $f$ which I think will allow a proof of convergence from all starting values along the lines of your answer. (2012-12-16)
  • @Eckhard: I agree with you. Moreover, I think $f'(c)\ge -1$ alone is sufficient to exclude the situation $f(l)=r$, $f(r)=l$. The reason is as follows. Letting $f(x)=c$, we have $f'(x)=0$ and $f$ is decreasing on $[0,x]$. Therefore, $f'(x)>f'(r)\Rightarrow r\le x\Rightarrow l=f(r)\ge f(x)=c\Rightarrow f'(c)\le f'(l)<-1$, a contradiction. (2012-12-17)
  • @richard: Let's assume we made the additional assumption that $f$ is symmetric about $1/2$. Then the maximal interval $(l,r)$ would be symmetric about $1/2$ as well, and, in particular, $f(l)=f(r)$. Thus, if the second of your two cases ($f(l)=r$, $f(r)=l$) were true, then $l$ would equal $r$, contradicting the stability of $p$. Hence, the first alternative must hold, and so $l=0$, $r=1$. So symmetry seems to allow us to drop the assumption that $-1<f'(p)$. (And in fact the assumption that $\min_x f(x)>0$.) (2012-12-17)
  • @Eckhard: If $f$ is symmetric about $1/2$, without assuming $f'(c)\ge -1$, then either $r=1$ or $r<1/2$. (2012-12-17)
  • @richard: Right, I didn't think of that. To remedy this, I suggest adding the assumption that $c<1/2$ and $f(c)<1/2$. Then the even iterates from $x=1/2$, $f^{2i}(1/2)$, would form a decreasing sequence, and the odd iterates $f^{2i+1}(1/2)$ would form an increasing sequence. They might still get trapped in a cycle, I think. (2012-12-17)
  • I now believe that trapping cannot happen. Assume that $f(c)<1/2$, so that $f^{2i}(1/2)$ is a decreasing sequence, bounded from below, hence convergent with limit $y$. This $y$ is a fixed point of $h=f\circ f$. Since $h'(x)=f'(f(x))f'(x)$, it follows that $h$ has minima at the points of $f^{-1}(1/2)$ and a maximum at $x=1/2$. The condition $f(c)<1/2$ guarantees that $h(1/2)<1/2$, which implies that $h$ has only one fixed point in $(0,1)$. Since $p$ is a fixed point of $h$, it follows that $y=p$. (2012-12-17)
  • @Eckhard: Your condition $f(c)<1/2$ means that for $J=[c,\frac{1}{2}]$, $f(J)\subset J$, and hence $f$ has a fixed point in $J$, which must be $p$. (2012-12-17)
  • @richard: Is it also sufficient to conclude that the iterates of $f$ converge to $p$ for starting values in $J$, as I tried to argue in my last comment? (2012-12-17)
  • @Eckhard: Yes, because the sequence of closed intervals $J_n=f^n(J)$ is decreasing, it must converge to some (possibly degenerate) interval $J_\infty$ with $f(J_\infty)=J_\infty$. Then you may show that $J_\infty=\{p\}$. (2012-12-17)
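The nested-intervals argument in the last comment can be checked numerically. The sketch below uses a hypothetical symmetric example of my own construction, $f(x)=\tfrac{5}{2}-\tfrac{3}{2}(\sqrt{x}+\sqrt{1-x})$, chosen so that $c=\min_x f(x)<1/2$ and $f(c)<1/2$ as required; it tracks densely sampled images $f^n(J)$ of $J=[c,\tfrac{1}{2}]$, which appear to shrink toward $\{p\}$:

```python
import math

# Hypothetical symmetric example (my own construction, not from the thread):
# convex, f(0) = f(1) = 1, min value c = 2.5 - 1.5*sqrt(2) < 1/2, f(c) < 1/2.
def f(x):
    return 2.5 - 1.5 * (math.sqrt(x) + math.sqrt(1 - x))

c = f(0.5)                                           # minimum value, at x = 1/2
J = [c + k * (0.5 - c) / 1000 for k in range(1001)]  # dense sample of J = [c, 1/2]

interval = J
for n in range(8):
    interval = [f(x) for x in interval]
    lo, hi = min(interval), max(interval)
    print(f"f^{n + 1}(J)  ~  [{lo:.6f}, {hi:.6f}]  width {hi - lo:.2e}")
```

Sampling only approximates the true images $f^n(J)$, so this is evidence rather than proof that $J_\infty=\{p\}$.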