
I am having trouble with the following proof:

Prove that if $f$ is differentiable on a closed interval $[a, b]$ and $\int_a^b f(x)g(x)\,dx = 0$ for every continuous function $g$, then $f(x) = 0$ on $[a, b]$.

The only idea I have is rewriting this as a Riemann sum and then noticing that $\Delta x > 0$, so either $g(x) = 0$ or $f(x)=0$. So if $g \neq 0$ then $f$ must equal $0$. However, what happens when $g = 0$?

  • To make sure I am parsing the statement correctly: do you mean that if $f$ is differentiable and $\int_a^b f(x)g(x)\,dx=0$ for every continuous function $g$, then $f(x)=0$ for all $x\in[a,b]$? Your strategy doesn't seem to match the problem. The statement (if I have translated it correctly) assumes that the integral is zero for all $g$, so you can pick $g$ at your convenience.
  • Yes, that is correct.
  • Perhaps not much of a hint, but I'll mention that (1) continuity of $f$ suffices, (2) if $h$ is continuous, $h(x)\geq0$ for all $x$, and $\int_a^b h(x)\,dx=0$, then $h(x)=0$ for all $x$, and (3) if you choose $g$ wisely, there is a quick solution using (2).
  • You may be interested to know that this is a form of the _fundamental lemma of the calculus of variations_. I should mention that Jonas's hint is excellent and really helps you to nail down _why_ this theorem is true.
  • Could you explain why I can choose $g$ at my convenience? If this is so, I would take $g = C \neq 0$, some constant, and then I would have $f=0$. This just seems utterly wrong, and I believe I am missing the point...
  • You can choose $g$ at your convenience because the hypothesis is supposed to hold for _all_ continuous $g$ (so it holds for any one you pick). You cannot simply choose $g = 1$, because you are not sure that $f$ is non-negative (which, as Jonas noted, is needed for that argument).
  • Is there a way to choose $g(x)$ to be negative exactly where $f(x)$ is negative, so that the product is non-negative and I can appeal to (2)?
  • Well, you could take $g = f$ (see the numerical sketch after these comments).
  • And EuYu has given a wise choice, as referred to above. EuYu's answer strengthens the theorem by restricting $g$ to smooth functions. It may interest you that $g$ can even be restricted to polynomial functions, e.g. using the Weierstrass approximation theorem.
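Here is a minimal numerical sketch of the $g = f$ trick (the particular $f$ is an arbitrary example): with $g = f$, the hypothesis reads $\int_a^b f(x)^2\,dx = 0$, and since $f^2$ is continuous and non-negative, hint (2) forces $f \equiv 0$. Any $f$ that is non-zero somewhere fails this:

```python
import numpy as np
from scipy.integrate import quad

# An arbitrary continuous f on [a, b] that is not identically zero.
a, b = 0.0, 1.0
f = lambda x: np.sin(3 * x) - 0.4

# With the choice g = f the hypothesis would force this integral to be 0,
# but the integrand f^2 is continuous and >= 0, so by hint (2) the integral
# can only vanish if f is identically zero.
val, _ = quad(lambda x: f(x) ** 2, a, b)
print(val)  # ~0.15 > 0, so this particular f cannot satisfy the hypothesis
```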

2 Answers

2

I will strengthen the lemma a little and provide a proof in a slightly different style from yours (which, although simple, does not reveal much of the structure of the problem).

The idea of the proof is that $g$ plays the role of "testing" where the function is non-zero. Think of $g$ as playing a role analogous to an indicator function which is zero everywhere except where $f$ is non-zero.

The following proof may seem a bit technical, but it is actually quite intuitive. It uses relatively elementary ideas, and the argument generalizes to higher dimensions.

Lemma: Suppose that we have $f$ continuous on $[a,\ b]$ such that $$\int_a^b f(x)g(x)\ \rm dx = 0$$ for all $g\in C^\infty\left([a,\ b]\right)$ (that is for all smooth functions $g$). Then $f$ is identically zero on $[a,\ b]$.

Proof: Suppose for the sake of contradiction that $f$ is not identically zero. Without loss of generality there is some point $x_0 \in (a,\ b)$ such that $f(x_0) > 0$ (otherwise replace $f$ by $-f$). Since $f$ is continuous, there is an $\epsilon$-neighborhood $N_\epsilon(x_0) \subseteq (a,\ b)$ around $x_0$ such that $$|f(x) - f(x_0)| < \frac{f(x_0)}{2}$$ for all $x\in N_\epsilon(x_0)$; in particular, $f(x) > \frac{f(x_0)}{2} > 0$ on $N_\epsilon(x_0)$.

Consider the bump function defined by $$g(x) = \begin{cases} \exp\left(\frac{-1}{\epsilon^2-(x-x_0)^2}\right) & \text{for}\ |x-x_0|<\epsilon \\ 0 & \text{elsewhere} \end{cases}$$ Note that $g$ is smooth everywhere (I will not prove this here); it looks like a Gaussian compressed to fit inside our $\epsilon$-neighborhood. Notice that $g > 0$ on $N_\epsilon(x_0)$ and $g = 0$ outside it. The function is shown below.

[figure: the bump function $g$, supported on $N_\epsilon(x_0)$]
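As a quick sanity check on the construction, here is a minimal sketch (assuming the arbitrary values $x_0 = 0$ and $\epsilon = 1$) evaluating the bump: it is strictly positive inside $N_\epsilon(x_0)$, exactly zero outside, and it decays to $0$ extremely fast at the boundary, which is what makes it smooth there:

```python
import numpy as np

def bump(x, x0=0.0, eps=1.0):
    """The smooth bump supported on (x0 - eps, x0 + eps)."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    inside = np.abs(x - x0) < eps
    # exp(-1 / (eps^2 - (x - x0)^2)) inside the neighborhood, 0 elsewhere
    out[inside] = np.exp(-1.0 / (eps**2 - (x[inside] - x0) ** 2))
    return out

xs = np.linspace(-1.5, 1.5, 7)
print(bump(xs))                  # zero for |x| >= 1, positive for |x| < 1
print(bump(np.array([0.999])))   # tiny but positive: the bump dies off smoothly
```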

Taking this $g$ as our test function, the integral reduces to $N_\epsilon(x_0)$ because $g$ vanishes outside it: $$\int_a^b f(x)g(x)\ \mathrm{d}x = \int_{x_0-\epsilon}^{x_0 + \epsilon}f(x)g(x)\ \mathrm{d}x$$ The integrand $fg$ is continuous and non-negative on $N_\epsilon(x_0)$, and strictly positive on the smaller interval $\left[x_0 - \frac{\epsilon}{2},\ x_0 + \frac{\epsilon}{2}\right]$. Discarding the non-negative remainder and applying the mean value theorem for integrals on that smaller interval gives $$\int_{x_0-\epsilon}^{x_0 + \epsilon}f(x)g(x)\ \mathrm{d}x \ \ge\ \int_{x_0-\epsilon/2}^{x_0 + \epsilon/2}f(x)g(x)\ \mathrm{d}x = f(x_0')g(x_0')\,\epsilon$$ for some $x_0' \in \left[x_0 - \frac{\epsilon}{2},\ x_0 + \frac{\epsilon}{2}\right]$. This point lies strictly inside $N_\epsilon(x_0)$, so $g(x_0') > 0$ and $f(x_0') > \frac{f(x_0)}{2}$, and the latter quantity is evidently non-zero: $$f(x_0')g(x_0')\,\epsilon > \frac{f(x_0)}{2}g(x_0')\,\epsilon > 0$$ This contradicts our initial assumption that the integral vanishes. Therefore it must be that $f= 0$ on $[a,\ b]$. $\square$
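To make the contradiction concrete, here is a minimal numerical sketch (with arbitrary choices $[a, b] = [0, 1]$, $x_0 = \frac12$, $\epsilon = \frac12$, and $f(x) = x + \frac12$, so that $f(x_0) > 0$): the integral of $fg$ comes out strictly positive, so such an $f$ cannot satisfy the hypothesis:

```python
import numpy as np
from scipy.integrate import quad

a, b = 0.0, 1.0
x0, eps = 0.5, 0.5               # a point where f is positive, and the radius
f = lambda x: x + 0.5            # arbitrary f with f(x0) = 1 > 0

def bump(x):
    # the smooth bump above, supported on (x0 - eps, x0 + eps)
    r = eps**2 - (x - x0) ** 2
    return np.exp(-1.0 / r) if r > 0 else 0.0

# g kills everything outside N_eps(x0), so integrating over [a, b]
# is the same as integrating over the neighborhood.
val, _ = quad(lambda x: f(x) * bump(x), a, b)
print(val)  # ~0.007 > 0: the integral cannot vanish, giving the contradiction
```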

As I mentioned, the bump function $g$ acts as an indicator which picks out the non-zero portions of $f$. Continuity forces these portions to be significant, in the sense that they are enough to force the integral to be non-zero. The only part which may seem a bit off-putting is the use of the bump function. This is actually unnecessary for your version: I used it for smoothness, whereas you only need continuity. You can replace the bump with, say, a triangle such as $g(x) = \max\{0,\ \epsilon - |x - x_0|\}$, and the proof still holds for continuous test functions.

Note that this is in fact a strengthening, even though it may seem more restrictive to use only $C^\infty$ functions. To see this, note that if the hypothesis is satisfied for all continuous functions, then it is trivially satisfied for all smooth functions as well, since $C^\infty \subseteq C^0$. Therefore this version implies yours.

1

By continuity, it is enough to prove the result for $a < x < b$. For $n$ large enough that $\left[x - \frac{2}{n},\ x + \frac{1}{n}\right] \subset [a, b]$, set $$g_n(t) = \begin{cases} t - \left(x - \frac{2}{n}\right) & x - \frac{2}{n} \le t \le x - \frac{1}{n} \\ \frac{1}{n} & x - \frac{1}{n} \le t \le x \\ \left(x + \frac{1}{n}\right) - t & x \le t \le x + \frac{1}{n} \\ 0 & \text{otherwise.} \end{cases}$$ It is easily verified that each such $g_n$ is continuous (a trapezoid of height $\frac{1}{n}$), so we have that

$$0 = \int_{a}^b f(t)\, g_n(t)\, dt \quad\text{for every such } n.$$ Notice that we may write this integral as $$\int_{x - \frac{2}{n}}^{x - \frac{1}{n}} \left(t - x + \frac{2}{n}\right)f(t)\, dt \;+\; \frac{1}{n} \int_{x - \frac{1}{n}}^x f(t)\, dt \;+\; \int_x^{x + \frac{1}{n}} \left(x + \frac{1}{n} - t\right)f(t)\, dt.$$

Now $n \int_{x - \frac{1}{n}}^x f(t)\, dt \to f(x)$ as $n \to \infty$, so the middle term, multiplied by $n^2$, tends to $f(x)$. In each outer term the weight integrates to $\frac{1}{2n^2}$ (e.g. $\int_{x-\frac{2}{n}}^{x-\frac{1}{n}} \left(t - x + \frac{2}{n}\right) dt = \frac{1}{2n^2}$), and $f(t) \to f(x)$ uniformly on the shrinking intervals, so each outer term, multiplied by $n^2$, tends to $\frac{f(x)}{2}$. Multiplying the identity above by $n^2$ and letting $n \to \infty$ therefore gives $0 = \frac{f(x)}{2} + f(x) + \frac{f(x)}{2} = 2f(x)$, which gives the result.
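As a sanity check on these limits, here is a minimal numerical sketch (with arbitrary choices $f(t) = \cos t$ and $x = 0.7$): multiplying $\int_a^b f(t)g_n(t)\,dt$ by $n^2$ converges to $2f(x)$, which is exactly why the hypothesis $\int_a^b f g_n = 0$ for all $n$ forces $f(x) = 0$:

```python
import numpy as np
from scipy.integrate import quad

f = lambda t: np.cos(t)          # arbitrary continuous f
x = 0.7                          # an interior point; f(x) ~ 0.7648

def gn(t, n):
    # trapezoid: rises on [x-2/n, x-1/n], flat at height 1/n on [x-1/n, x],
    # falls back to 0 on [x, x+1/n], zero elsewhere
    if x - 2/n <= t <= x - 1/n:
        return t - (x - 2/n)
    if x - 1/n < t <= x:
        return 1/n
    if x < t <= x + 1/n:
        return (x + 1/n) - t
    return 0.0

for n in (10, 100, 1000):
    val, _ = quad(lambda t: f(t) * gn(t, n), x - 2/n, x + 1/n)
    print(n, n**2 * val)         # converges to 2*f(x) ~ 1.5297
```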