Let $X = C^{k+2, \alpha}(S(T))$ and $Y = C^{k, \alpha}(S(T))$, where $S(T) = S^1 \times [0,T]$. Don't think of $T$ as fixed; it is allowed to vary, so these Banach spaces contain functions defined on different time intervals. Suppose there is a map $F:X \to Y$ given by $F(u) = u_t - a(x,t,u,u_x,u_{xx})$, where $a(x,t,z,p,q)$ is smooth in its arguments. We want to show that there is a unique $u^*$ such that $F(u^*) = 0.$ To do this, we can show that the derivative $$DF(u)v = v_t - a_z(u)v - a_p(u)v_x - a_q(u)v_{xx}$$ is invertible (i.e. a linear isomorphism) at a particular function $u$. It is invertible, and we also know that the inverses $DF(u)^{-1}$ vary continuously and are uniformly bounded (for bounded $u$), regardless of the time interval in the domain.
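For what it's worth, here is how I understand the linearization is computed: differentiate $F$ along the line $s \mapsto u + sv$, abbreviating $a_z(u) = a_z(x,t,u,u_x,u_{xx})$ and similarly for $a_p, a_q$. Please correct me if I have misread this.
$$\begin{aligned}
DF(u)v &= \left.\frac{d}{ds}\right|_{s=0} F(u+sv) \\
&= \left.\frac{d}{ds}\right|_{s=0}\Big[(u+sv)_t - a\big(x,t,\,u+sv,\,(u+sv)_x,\,(u+sv)_{xx}\big)\Big] \\
&= v_t - a_z(u)\,v - a_p(u)\,v_x - a_q(u)\,v_{xx},
\end{aligned}$$
by the chain rule.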
Now can someone please explain these points I don't understand:
If there is a $u^0 \in X$ such that $F(u^0)$ is small, then the inverse function theorem implies that for all small $s \in Y$, there exists a unique $u$ such that $F(u) = s$, and $u$ depends continuously on $s$.
Is that right? By "small", I take it the author means close to zero. My understanding is that if $F(u^0)$ lies in a suitable neighbourhood of zero, then for every function $s$ in that same neighbourhood of $0$ we can find a $u$ in some neighbourhood of $u^0$ such that $F(u) = s$. Is that correct? And why does $u$ depend continuously on $s$?
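For reference, the quantitative form of the inverse function theorem I have in mind is the following (I may be off on the constants): if $DF(u^0) : X \to Y$ is a linear isomorphism with $\|DF(u^0)^{-1}\| \le C$, and $DF$ is continuous near $u^0$, then there exist $\delta, \epsilon > 0$ such that $F$ restricted to $B_\delta(u^0)$ is a bijection onto a neighbourhood of $F(u^0)$ containing $B_\epsilon(F(u^0))$, and the local inverse is Lipschitz:
$$\|F^{-1}(s_1) - F^{-1}(s_2)\|_X \le 2C\,\|s_1 - s_2\|_Y \quad \text{for } s_1, s_2 \in B_\epsilon(F(u^0)).$$
If this is the right statement, the Lipschitz bound would in particular give the continuous dependence of $u$ on $s$.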
Now if $u^0 = a(x,t,0,0,0)t \in X$, then provided $T$ is small enough, $F(u^0)$ is as close to $0$ as required. This is the point at which we take the time interval $[0,T]$ to be short.
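As I understand it, the computation behind this claim is the following (writing $u^0(x,t) = t\,a(x,t,0,0,0)$):
$$\begin{aligned}
F(u^0) &= u^0_t - a\big(x,t,u^0,u^0_x,u^0_{xx}\big) \\
&= a(x,t,0,0,0) + t\,a_t(x,t,0,0,0) - a\big(x,t,u^0,u^0_x,u^0_{xx}\big).
\end{aligned}$$
Since $u^0$, $u^0_x$ and $u^0_{xx}$ all vanish at $t = 0$, this expression is $0$ at $t = 0$, and by smoothness of $a$ it is of size $O(T)$ on $S(T)$ in the sup norm.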
It is true that $F(u^0) \to 0$ pointwise as $t \to 0$. But that is only pointwise convergence; don't we need $\|F(u^0)\|_Y \to 0$ as $T \to 0$, i.e. convergence in the $C^{k,\alpha}$ norm? Also, how can we be sure that $0$ in fact lies in the neighbourhood of $F(u^0)$ in $Y$ on which the inverse function theorem applies?