
Let $$X = C^{k+2, \alpha}(S(T)), \qquad Y = C^{k, \alpha}(S(T)),$$ where $S(T) = S^1 \times [0,T]$. Don't think of $T$ as fixed, but as varying, so these Banach spaces contain functions defined on time intervals of different lengths. Suppose there is a map $F:X \to Y$ with $$F(u) = u_t - a(x,t,u,u_x,u_{xx}),$$ where $a(x,t,z,p,q)$ is smooth in its arguments. We want to show that there is a unique $u^*$ such that $F(u^*) = 0.$ To do this, we can show that the derivative at a particular function $u$, $$DF(u)v = v_t - a_z(u)v - a_p(u)v_x - a_q(u)v_{xx},$$ is invertible (i.e. a linear isomorphism). It is invertible, and we also know that the inverses $DF(u)^{-1}$ vary continuously and are uniformly bounded (for bounded $u$), regardless of the time interval in the domain.
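(If I am computing correctly, this linearization is just the derivative of $F$ in the direction $v$: $$DF(u)v = \frac{d}{ds}\Big|_{s=0} F(u+sv) = v_t - \frac{d}{ds}\Big|_{s=0} a(x,t,\,u+sv,\,u_x+sv_x,\,u_{xx}+sv_{xx}) = v_t - a_z(u)v - a_p(u)v_x - a_q(u)v_{xx},$$ where $a_z(u)$, $a_p(u)$, $a_q(u)$ abbreviate the partial derivatives of $a$ evaluated at $(x,t,u,u_x,u_{xx})$.)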

Now can someone please explain these points I don't understand:

If there is a $u^0 \in X$ such that $F(u^0)$ is small, then the inverse function theorem implies that for all small $s \in Y$, there exists a unique $u$ such that $F(u) = s$, and $u$ depends continuously on $s$.

Is that right? By "small", I guess the author means close to zero. My understanding is that if $F(u^0)$ lies in a (small enough) neighbourhood of zero, then for every function $s$ in that same neighbourhood of $0$ we can find a $u$ in some neighbourhood of $u^0$ such that $F(u) = s$. Is that correct? And why does $u$ depend continuously on $s$?

Now if $u^0 = a(x,t,0,0,0)t \in X$, provided $T$ is small enough, $F(u^0)$ is as close to $0$ as required. This is the point when we take the time interval $[0,T]$ to be short.

It is true that $F(u^0) \to 0$ pointwise as $t \to 0$, but don't we need convergence in the $Y$ norm as $T \to 0$? Also, how can we be sure that $0$ in fact lies in the neighbourhood of $Y$ on which $F$ has been inverted?
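(For what it's worth, substituting $u^0 = t\,a(x,t,0,0,0)$ directly, and abbreviating $a(x,t) := a(x,t,0,0,0)$, I get $$F(u^0) = a(x,t) + t\,a_t(x,t) - a\big(x,t,\; t a,\; t a_x,\; t a_{xx}\big),$$ and it is the $C^{k,\alpha}$ (parabolic Hölder) norm of this expression on $S^1 \times [0,T]$ that would have to be small for small $T$.)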

  • 0
    What is the source of your quotes? My first reading was a bit hasty. There seems now to be a problem with the second statement. (The first statement is probably okay).2012-07-18
  • 0
    @WillieWong Thanks for your help. The source is "Curvature-Driven Flows: A variational approach" by Fred Almgren, Jean E. Taylor, and Lihe Wang. If you can't find it, here's the most relevant pages http://s14.postimage.org/9yhvzrs1d/image.jpg, http://s13.postimage.org/qv7c066vb/image.jpg and http://s17.postimage.org/tyhpehpb3/image.jpg.2012-07-18
  • 0
    @WillieWong Note that I changed the notation and some things are different since I am using this paper in conjunction with another paper ("The Curve Shortening Problem" by Xi-Ping Zhu and Chou).2012-07-18
  • 0
    Also, the paper is in the book "Selected Works of Frederick J Almgren, Jr."2012-07-18
  • 0
    Sorry I got the order wrong. It should be: http://s17.postimage.org/tyhpehpb3/image.jpg, http://s13.postimage.org/qv7c066vb/image.jpg, and http://s14.postimage.org/9yhvzrs1d/image.jpg,2012-07-18
  • 0
    Note that in your cited source $G$ (which I think is equivalent to your $a$) does _not_ depend on $t$. Also, it looks like the pages you scanned are only the sketch of the proof, with the complete proof to follow. Perhaps the required computations are done there? If not I'll look for the paper tomorrow to see.2012-07-18
  • 0
    @WillieWong In the other paper, they use G(x,t,0,0,0); maybe this is an error. You're right that they do give more details afterwards, but these details are for showing Frechet differentiability and showing the bounds of the inverse of the derivatives and so on. They conclude with "We have now shown that the DF[g]'s are uniformly isomorphisms as required. The remainder of argument to prove the theorem was set forth above". So I don't think the rest of the pages will help.2012-07-18
  • 0
    @WillieWong I think it should be $G(x,0,0,0,0)$ (like in the Almgren paper) and not $G(x,t,0,0,0)$. I just checked with yet another paper which seems to confirm this (but gives no other useful details). So maybe the second statement will hold.2012-07-19
  • 0
    That the second statement holds depends on the form of the function $G$ and also on the precise form of the functional $F$ (namely, it depends also on the terms in the denominator that you omitted), since it appears that certain cancellations will still be required for the higher derivative terms.2012-07-19
  • 0
    @WillieWong Thanks for your help. The paper I scanned treats a more general case than I need and is too complex for mine. So my function $G$ is just smooth in its arguments, and there is no denominator in my $F$; it's just what I wrote at the top. Maybe uniform parabolicity of the (parabolic) PDE will help, but I'm not sure. I'll work through your comments.2012-07-19
  • 0
    Also, you should check to see which spaces you are using! Since this is a parabolic problem, I would've thought that you'd be using a parabolic Hölder space; in your question statement you seem to be using the usual kind of Hölder spaces.2012-07-19
  • 0
    @WillieWong Sorry for my oversight. I did mean the parabolic Hölder space with my notation, not the usual space.2012-07-19

1 Answer

  1. Your interpretation is not quite right. The inverse function theorem (under your assumptions) states that for $u^0$ there exists a neighborhood $N^0 \ni F(u^0)$ such that $F$ is invertible on $N^0$ with continuous inverse (in fact continuously differentiable). By the uniform bound on $DF^{-1}$ you can take the size of $N^0$ to be uniform: in particular, there exists a constant $\delta$ such that $N^0 \supseteq F(u^0) + B_\delta$. So if $\|F(u^0)\| < \delta$, then $0\in N^0$ and there exists $\epsilon$ such that $B_\epsilon \subseteq N^0$, meaning that for $s\in B_\epsilon \subseteq N^0$ we have a continuous map $F^{-1}: B_\epsilon \to X$. (A sketch of the quantitative statement being used here is given after this list.)

  2. Yes, you need convergence in the $Y$ norm. Note that (writing $a(x,t) = a(x,t,0,0,0)$) $$F(ta(x,t)) = a(x,t) + ta_t(x,t) - a(x,t,\,ta,\, ta_x,\, ta_{xx}).$$ The first and third terms combine to give (using the differentiability of $a$) a quantity that is $O(t)$; the computation is sketched after this list. This shows in particular that all $x$ derivatives of $F(ta(x,t))$ are $O(t)$ and hence can be made as small as you want by making $t$ small.

    There is, however, a problem with the $t$ derivatives: if you take $F(ta(x,t))_t |_{t = 0}$ you get $$ a_t(x,0) + a_t(x,0) - a_t(x,0) - a_z(x,0)\, a(x,0) - a_p(x,0)\, a_x(x,0) - a_q(x,0)\, a_{xx}(x,0), $$ which is not in general $0$. So whichever source you are quoting from is incomplete in its argument.
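To expand on both points (this is only a sketch; the constant $C$ below and the Lipschitz hypothesis on $DF$ are my paraphrase of the uniform bounds you already have): if $\|DF(u)^{-1}\| \le C$ uniformly for bounded $u$ and $DF$ is Lipschitz on bounded subsets of $X$, then the quantitative inverse function theorem gives a $\delta$ depending only on these constants (not on $u^0$ or on $T$) such that $F$ maps a neighborhood of $u^0$ onto $F(u^0) + B_\delta$, with $$\|F^{-1}(s_1) - F^{-1}(s_2)\|_X \le 2C\, \|s_1 - s_2\|_Y \qquad \text{for } s_1, s_2 \in F(u^0) + B_\delta,$$ which is exactly the continuous dependence of $u$ on $s$ in point 1.

For point 2, the cancellation between the first and third terms is the fundamental theorem of calculus applied in the last three slots of $a$: $$a(x,t) - a(x,t,\,ta,\,ta_x,\,ta_{xx}) = -\,t \int_0^1 \big( a_z\, a + a_p\, a_x + a_q\, a_{xx} \big)\big(x,t,\,\sigma t a,\, \sigma t a_x,\, \sigma t a_{xx}\big)\, d\sigma = O(t),$$ and differentiating $F(ta(x,t)) = a + t a_t - a(x,t,ta,ta_x,ta_{xx})$ in $t$ by the chain rule and then setting $t=0$ produces the expression displayed above.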

  • 0
    Thanks a lot. I'm afraid I have a few questions: 1) I guess you used some variant of the bounded inverse theorem to say that $DF(u)$ is uniformly bounded (since we only know that $DF^{-1}(u)$ is uniformly bounded). 2) what do you mean by take the size of $N^0$ to be uniform? Is there some theorem that says that uniformly bounded derivative gives such a property about taking $N^0$ to be uniform?2012-07-18
  • 0
    3) In the last sentence of your first point (1.), I should read that as "So if $\lVert F(u^0) \rVert < \delta$, then $0 \in N^0$, ...". I'll need to think more about what you wrote but just checking whether that's right. 4) Sorry about not defining $G$; $G$ is actually the smooth function $a$ that I defined. By "$G(x,t,0,0,0)$ lies in a bounded ball inside $Y$", do you not mean that $\lVert G(x,t,0,0,0)\rVert_Y \leq \text{const}$? Sorry for so many questions.2012-07-18
  • 1
    (1) I meant $DF^{-1}$, not $DF$. Sorry about the typo. (3) Yes. (2) The meaning of uniform $N^0$ is after the colon in the same sentence. You need to use uniform boundedness of $DF^{-1}$ in addition to the fact that _your_ $F$ is obviously continuously differentiable, and furthermore $DF$ is uniformly bounded on any bounded subset of $X$. These use the properties of $a$. With those bounds you can get an effective version of the Inverse Function Theorem.2012-07-18
  • 0
    Ah wait, sorry, I see where the confusion is with point 4. Let me edit the answer to fix that.2012-07-18
  • 0
    @WillieWong You say that since $DF^{-1}$ has a uniform bound, we can take the size of $N^0$ to be uniform. But isn't your definition of uniform just what we get from the ordinary IFT, i.e. that there is an open neighbourhood (and in particular an open ball inside this neighbourhood) around the point of interest?2012-07-20
  • 1
    @TagWoh: Uniformity is the statement that the $\delta$ is a constant (in particular should _not_ depend on $F(u^0)$). In the usual statement of IFT (which requires only strong differentiability at a single point) the $\delta$ is allowed to depend on everything.2012-07-20