What does $C^\infty([a,b]; \mathbb{R})$ denote? I know it's a set of functions $[a,b] \to \mathbb{R}$. I think the $C$ stands for continuous. What does the $^\infty$ mean here?
-
0 Infinitely differentiable, i.e. smooth. – 2012-11-11
2 Answers
Smooth (i.e., infinitely differentiable) functions from $[a,b]$ into $\mathbb{R}$.
-
1 What exactly are the smoothness conditions at $a$ and at $b$? – 2012-11-11
-
0 @MarcvanLeeuwen: Good question. See my answer. – 2012-11-11
-
0 Existence of one-sided derivatives is sufficient here. In general, I suppose the domain should be 'sufficiently rich' around the boundary to uniquely define a derivative. – 2012-11-11
There are two natural definitions.
- It could mean continuous functions on $[a,b]$ which are smooth on the interior $(a,b)$ and such that all derivatives have limits at the endpoints $a$ and $b$.
- It could also mean functions on $[a,b]$ that can be extended to infinitely differentiable functions on a larger, open interval $(a-\varepsilon,b+\varepsilon)$.
Fortunately, the two are equivalent. It is clear that a function which is smooth in the second sense is smooth in the first sense. The converse can be proved by appealing to the theorem that, for every sequence $(c_n)$ of real numbers, there is an infinitely differentiable function $g$ defined on a neighbourhood of $a$ with $g^{(n)}(a)=c_n$ for all $n$. Given a function $f\in C^\infty([a,b])$ in the first sense, put $c_n=\lim_{x\to a^+}f^{(n)}(x)$, pick a $g$ as in the theorem, and extend $f$ by using $g$ to the left of $a$. Do the same at $b$. The result is a smooth function on a bigger interval, so $f$ is smooth on $[a,b]$ in the second sense.
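To make the first definition concrete, here is a small symbolic check (a sketch using sympy; the specific function and interval are chosen for illustration): the flat function $f(x)=e^{-1/x^2}$ is smooth on $(0,1]$, and every derivative has one-sided limit $0$ at $x=0$. So $f$ is $C^\infty([0,1])$ in the first sense, and since all the $c_n$ vanish, the extension $g$ in the argument above can simply be taken to be $g\equiv 0$.

```python
# Illustration (not part of the proof): verify that each derivative of
# f(x) = exp(-1/x^2) has one-sided limit 0 at x = 0, so f extends
# smoothly past 0 by the zero function.
import sympy as sp

x = sp.symbols('x', positive=True)
f = sp.exp(-1 / x**2)

for n in range(5):
    fn = sp.diff(f, x, n)              # n-th derivative on (0, 1]
    c_n = sp.limit(fn, x, 0, dir='+')  # c_n = lim_{x -> 0+} f^(n)(x)
    print(n, c_n)                      # each c_n is 0
```

Of course this only checks finitely many derivatives; the general statement that all $c_n=0$ for this $f$ is a standard induction.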
-
0 The theorem you cite can be summarised as "every formal power series (no matter how divergent) is the Taylor series of a smooth function". It is also useful as an antidote against the optimistic attitude about Taylor series that is often transmitted to students. – 2012-11-11
-
0 @MarcvanLeeuwen: Indeed. I was going to say that, but my answer was getting too long. Thanks for bringing it up. – 2012-11-11
-
0 Would you have a reference to that theorem, please? I think some care is necessary here, in that the extension $g$ need not match $f$ on the original domain; e.g., $f(x) = e^{-\frac{1}{x^2}} 1_{(0,1]}(x)$ on $[0,1]$ has $f^{(k)}(0) = 0$ for all $k$. – 2012-11-11
-
0 @copper.hat: No, I don't have a reference at hand. It's sort of folklore, found in various textbooks. You don't need $g$ to match $f$ on the original domain, though. You just use $f$ inside, $g$ outside. The important point is that all the derivatives are continuous across the boundary point. – 2012-11-11
-
0 Thanks! I just found it; it is called Borel's lemma. However, the construction seems a little artificial to me, in that it just defines a function $g$ whose derivatives match the limits of $f$'s derivatives. But since $f$ and $g$ need not match on the overlapping domains, could one not have obtained the same result just by defining the derivative at the endpoint as the limit of the derivative and showing that it has the appropriate Fréchet-like property there? – 2012-11-11