
Let $X = C^{k+2, \alpha}(S(T)),$ $Y = C^{k, \alpha}(S(T)),$ where $S(T) = S^1 \times [0,T]$. Don't think of $T$ as fixed, but varying; so these Banach spaces contain functions defined on different time intervals. Suppose there is a map $F:X \to Y$ with $F(u) = u_t - a(x,t,u,u_x,u_{xx})$, where $a(x,t,z,p,q)$ is smooth in its arguments. We want to show that there is a unique $u^*$ such that $F(u^*) = 0.$ To do this, we can show that the derivative at $u$, $DF(u)v = v_t - a_z(u)v - a_p(u)v_x - a_q(u)v_{xx}$, is invertible (i.e. a linear isomorphism) at a particular function $u$. It is invertible, and we also know that the inverse mappings $DF(u)^{-1}$ vary continuously and are uniformly bounded (for bounded $u$) regardless of the time interval in the domain.

Now can someone please explain these points I don't understand:

If there is a $u^0 \in X$ such that $F(u^0)$ is small, then the inverse function theorem implies that for all small $s \in Y$, there exists a unique $u$ such that $F(u) = s$, and $u$ depends continuously on $s$.

Is that right? By "small", I guess the author means close to zero. My understanding is: if $F(u^0)$ lies in a neighbourhood of zero, then for every function $s$ in that same neighbourhood of $0$ we can find a $u$ in some neighbourhood of $u^0$ such that $F(u) = s$. Is that correct? And why does $u$ depend continuously on $s$?

Now if $u^0 = a(x,t,0,0,0)t \in X$, provided $T$ is small enough, $F(u^0)$ is as close to $0$ as required. This is the point when we take the time interval $[0,T]$ to be short.

It is true that $F(u^0) \to 0$ as $t \to 0$. But this is pointwise convergence; don't we need convergence in the $Y$ norm? Also, how can we be sure that $0$ in fact lies in the neighbourhood in $Y$ on which the inverse of $F$ is defined?

  • @WillieWong Sorry for my oversight. I did mean the parabolic Hölder space with my notation, not the usual space. 2012-07-19

1 Answer

  1. Your interpretation is not quite right. The inverse function theorem (under your assumptions) states that for $u^0$ there exists a neighborhood $N^0 \ni F(u^0)$ such that $F$ is invertible on $N^0$ with continuous inverse (in fact continuously differentiable). By the uniform bound on $DF^{-1}$ you can take the size of $N^0$ to be uniform: in particular there exists a constant $\delta$ such that $N^0 \supseteq F(u^0) + B_\delta$. So if $\|F(u^0)\| < \delta$, then $0\in N^0$ and there exists $\epsilon$ such that $B_\epsilon \subseteq N^0$, meaning that for $s\in B_\epsilon \subseteq N^0$ we have a continuous map $F^{-1}: B_\epsilon \to X$.
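    A sketch of where the uniform $\delta$ comes from (my paraphrase, assuming the uniform bound $\|DF(u)^{-1}\|_{Y\to X}\le C$ for $u$ near $u^0$): solve $F(u) = s$ by the modified Newton iteration underlying the contraction-mapping proof of the IFT,

    ```latex
    u_{n+1} = u_n - DF(u^0)^{-1}\bigl(F(u_n) - s\bigr), \qquad u_0 = u^0 .
    ```

    This map is a $\tfrac12$-contraction on a ball $B_r(u^0)$ once $\sup_{B_r(u^0)}\|I - DF(u^0)^{-1}DF(u)\| \le \tfrac12$, and it maps $B_r(u^0)$ into itself provided $\|DF(u^0)^{-1}(F(u^0)-s)\| \le r/2$, which holds whenever $\|F(u^0)-s\|_Y \le r/(2C) =: \delta$. Since $C$ (and the modulus of continuity of $DF$) are uniform, so is $\delta$.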

  2. Yes, you need convergence in the $Y$ norm. Note that (writing $a(x,t) = a(x,t,0,0,0)$) $F(ta(x,t)) = a(x,t) + ta_t(x,t) - a(x,t,ta, ta_x, ta_{xx})$. The first and third terms combine to give (using the differentiability of $a$) a quantity that is $O(t)$. This shows in particular that all $x$ derivatives of $F(ta(x,t))$ are $O(t)$ and hence can be made as small as you want by making $t$ small.
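    The cancellation of the first and third terms can be made explicit by the fundamental theorem of calculus in the last three slots of $a$ (a sketch; $\sigma$ is an auxiliary integration variable introduced here):

    ```latex
    a(x,t) - a(x,t,ta,ta_x,ta_{xx})
      = -\int_0^1 \frac{d}{d\sigma}\, a(x,t,\sigma t a,\sigma t a_x,\sigma t a_{xx})\,d\sigma
      = -t\int_0^1 \bigl[a_z\,a + a_p\,a_x + a_q\,a_{xx}\bigr]
          (x,t,\sigma t a,\sigma t a_x,\sigma t a_{xx})\,d\sigma
      = O(t),
    ```

    uniformly in $x$ and $t$, since $a$ is smooth and its arguments stay bounded; differentiating under the integral gives the same $O(t)$ bound for the $x$ derivatives.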

    There is, however, a problem with the $t$ derivatives: if you take $F(ta(x,t))_t |_{t = 0}$ you get $ a_t(x,0) + a_t(x,0) - a_t(x,0) - a_z(x,0) a(x,0) - a_p(x,0) a_x(x,0) - a_q(x,0) a_{xx}(x,0), $ which is not in general $0$. So whichever source you are quoting from is incomplete in its argument.
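    As a sanity check that this $t$ derivative really can be nonzero, here is a symbolic computation (my own example, not from the post) with the concrete choice $a(x,t,z,p,q) = q + z^2 + \sin x$, so that $a(x,t,0,0,0) = \sin x$ and $u^0 = t\sin x$:

    ```python
    import sympy as sp

    x, t = sp.symbols('x t')

    # Hypothetical concrete choice: a(x,t,z,p,q) = q + z^2 + sin(x),
    # so a(x,t,0,0,0) = sin(x) and u^0 = t*sin(x).
    u0 = t * sp.sin(x)

    # F(u) = u_t - a(x,t,u,u_x,u_xx) = u_t - (u_xx + u^2 + sin(x))
    F_u0 = sp.simplify(sp.diff(u0, t) - (sp.diff(u0, x, 2) + u0**2 + sp.sin(x)))
    # F(u^0) = t*sin(x) - t^2*sin(x)^2, which is O(t) as claimed...

    # ...but its t-derivative at t = 0 is sin(x), not 0 — matching the
    # formula above, whose only surviving term here is -a_q * a_xx = sin(x).
    dF_dt_at_0 = sp.diff(F_u0, t).subs(t, 0)
    print(dF_dt_at_0)
    ```

    So $F(u^0)$ itself is small for small $t$, while $\partial_t F(u^0)|_{t=0} = \sin x \neq 0$, illustrating why the quoted argument does not control the full $C^{k,\alpha}$ (parabolic) norm.
    
    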

  • @TagWoh: Uniformity is the statement that the $\delta$ is a constant (in particular it should _not_ depend on $F(u^0)$). In the usual statement of the IFT (which requires only strong differentiability at a single point) the $\delta$ is allowed to depend on everything. 2012-07-20