
I am having trouble with the following proof:

Prove that if $f$ is differentiable on a closed interval $[a, b]$ and $\int\limits_a^b f(x)g(x)\,dx = 0$ for every continuous function $g$, then $f(x) = 0$ on $[a, b]$.

The only idea I have is rewriting this as a Riemann sum and then noticing that $\Delta x > 0$, so either $g(x) = 0$ or $f(x)=0$. So if $g \neq 0$ then $f$ must equal $0$. However, what happens when $g = 0$?

  • And EuYu has given a wise choice as referred to above. EuYu's answer strengthens the theorem by restricting $g$ to smooth functions. It may interest you that $g$ can be restricted to polynomial functions, e.g. using the Weierstrass approximation theorem. (2012-11-28)

2 Answers


I will strengthen the lemma a little bit and provide a proof in a slightly different style than what you have (which although simple, does not reveal much of the structure of the problem).

The idea of the proof is that $g$ plays the role of "testing" where the function is non-zero. Think of $g$ as playing a role analogous to an indicator function: zero everywhere except where $f$ is non-zero.

The following proof may seem a bit technical, but it is actually quite intuitive. It uses relatively elementary ideas and the idea of the proof generalizes into higher dimensions.

Lemma: Suppose that we have $f$ continuous on $[a,\ b]$ such that $\int_a^b f(x)g(x)\ \rm dx = 0$ for all $g\in C^\infty\left([a,\ b]\right)$ (that is for all smooth functions $g$). Then $f$ is identically zero on $[a,\ b]$.

Proof: Suppose for the sake of contradiction that $f$ is not identically zero. Without loss of generality (replacing $f$ by $-f$ if necessary), there exists some point $x_0 \in (a,\ b)$ such that $f(x_0) > 0$. Since $f$ is continuous, there is an $\epsilon$-neighborhood $N_\epsilon(x_0)$ around $x_0$ such that $|f(x) - f(x_0)| < \frac{f(x_0)}{2}$, and hence $f(x) > \frac{f(x_0)}{2} > 0$, for all $x\in N_\epsilon(x_0)$.

Consider the bump function defined by $g(x) = \begin{cases} \exp\left(\frac{-1}{\epsilon^2-(x-x_0)^2}\right) & \text{for}\ |x-x_0|<\epsilon \\ 0 & \text{elsewhere} \end{cases}$ Note that $g$ is a function which is smooth everywhere (I will not prove this here). It resembles a Gaussian scaled to fit inside our $\epsilon$-neighborhood. Notice that $g > 0$ on $N_\epsilon(x_0)$. The function is shown below

[figure: the bump function $g$, supported on $N_\epsilon(x_0)$]

Taking $g$ as our test function, the integral reduces to $N_\epsilon(x_0)$: $$\int_a^b f(x)g(x)\ \mathrm{d}x = \int_{x_0-\epsilon}^{x_0 + \epsilon}f(x)g(x)\ \mathrm{d}x$$ The function $fg$ is continuous and does not change sign inside the neighborhood, so we may invoke the mean value theorem for integrals to get $$\int_{x_0-\epsilon}^{x_0 + \epsilon}f(x)g(x)\ \mathrm{d}x = f(x_0')g(x_0')\int_{x_0 - \epsilon}^{x_0 + \epsilon}1\ \mathrm{d}x = 2f(x_0')g(x_0')\epsilon$$ where $x_0'$ is some point in $N_\epsilon(x_0)$. Since $f(x_0') > \frac{f(x_0)}{2}$ and $g(x_0') > 0$, the latter quantity is strictly positive: $$2f(x_0')g(x_0')\epsilon > f(x_0)g(x_0')\epsilon > 0$$ This contradicts our assumption that the integral vanishes for every smooth $g$. Therefore it must be that $f= 0$ on $[a,\ b]$. $\square$
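As a numerical sanity check of the argument above (my own sketch; the choices of $f$, $x_0$, $\epsilon$, and the crude quadrature routine are illustrative, not from the answer), the bump really does "detect" a point where $f$ is positive:

```python
import math

def bump(x, x0, eps):
    """Smooth bump: exp(-1/(eps^2 - (x-x0)^2)) for |x-x0| < eps, zero elsewhere."""
    if abs(x - x0) >= eps:
        return 0.0
    return math.exp(-1.0 / (eps**2 - (x - x0)**2))

def integrate(h, a, b, m=20000):
    """Composite midpoint rule (crude, but enough for an illustration)."""
    dx = (b - a) / m
    return sum(h(a + (i + 0.5) * dx) for i in range(m)) * dx

# Illustrative choices: f(x) = x is continuous and positive at x0 = 0.5;
# the bump is supported on (0.3, 0.7), where f > 0 throughout.
x0, eps = 0.5, 0.2
f = lambda x: x
val = integrate(lambda x: f(x) * bump(x, x0, eps), 0.0, 1.0)
# val is strictly positive, as the proof predicts
```

The integrand $fg$ is non-negative everywhere and strictly positive on the open neighborhood, which is exactly why the integral cannot vanish.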

As I mentioned, the bump function $g$ acts as an indicator which picks out the non-zero portions of $f$. Continuity forces these portions to be significant, in the sense that they are enough to force the integral to be non-zero. The only part which may seem a bit off-putting is the use of the bump function. This is actually unnecessary for your version (I used it for smoothness, whereas you only need continuity). You can replace the bump with, say, a triangle, the formula of which is easy to write down, and the proof will still hold for continuous functions.
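For concreteness, one such triangular test function (my own formula, not from the answer: peak height $1$ at $x_0$, support $[x_0-\epsilon,\ x_0+\epsilon]$) is

```latex
g(x) = \max\left(0,\ 1 - \frac{|x - x_0|}{\epsilon}\right)
```

This $g$ is continuous but not differentiable at $x_0$ and $x_0 \pm \epsilon$, which is fine when the lemma only quantifies over continuous test functions.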

Note that this is in fact a strengthening even though it may seem more restrictive in using $C^\infty$ functions. To see this, note that if the hypothesis is satisfied for all continuous functions, then it is trivially satisfied for all smooth functions as well, since $C^\infty \subseteq C^0$. Therefore this version implies yours.


By continuity, it'll be enough to prove the result for $a < x < b$. Set $$g_n(t) = \begin{cases} t - (x-\frac{2}{n}) & x - \frac{2}{n} \le t \le x - \frac{1}{n} \\ \frac{1}{n} & x - \frac{1}{n} \le t \le x \\ (x+\frac{1}{n}) - t & x \le t \le x + \frac{1}{n} \\ 0 & \text{otherwise} \end{cases}$$ It's easily verified that $g_n$ is a continuous function for $n$ large enough that $[x-\frac{2}{n},\ x+\frac{1}{n}] \subseteq (a,\ b)$, so we have that

$0 = \int_{a}^b f(t) g_n(t) dt$, for every $n$. Notice that we may write this integral as $\int_{x - \frac{2}{n}}^{x - \frac{1}{n}} (t - x + \frac{2}{n})f(t) dt + \frac{1}{n} \int_{x - \frac{1}{n}}^x f(t) dt + \int_x^{x + \frac{1}{n}} (x + \frac{1}{n} - t)f(t) dt $.

Now multiply the identity by $n^2$. For the middle term, $n^2 \cdot \frac{1}{n} \int_{x - \frac{1}{n}}^x f(t)\, dt = n \int_{x - \frac{1}{n}}^x f(t)\, dt \to f(x)$ as $n \to \infty$, by continuity of $f$. For the first and third terms, the weights satisfy $\int_{x-\frac{2}{n}}^{x-\frac{1}{n}} (t - x + \frac{2}{n})\, dt = \int_x^{x+\frac{1}{n}} (x + \frac{1}{n} - t)\, dt = \frac{1}{2n^2}$, so by continuity each of these integrals, multiplied by $n^2$, converges to $\frac{f(x)}{2}$. Since the whole sum is $0$ for every $n$, letting $n \to \infty$ gives $0 = \frac{f(x)}{2} + f(x) + \frac{f(x)}{2} = 2f(x)$, which gives the result.
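The limiting behaviour of these integrals can be checked numerically (my own sketch; the choices of $f$, $x$, and the quadrature routine are illustrative assumptions): $n^2 \int_a^b f(t)\,g_n(t)\,dt$ approaches $2f(x)$ as $n$ grows.

```python
import math

def g_n(t, x, n):
    """The answer's trapezoidal test function: linear ramp up on [x-2/n, x-1/n],
    constant 1/n on [x-1/n, x], linear ramp down on [x, x+1/n], zero elsewhere."""
    if x - 2.0 / n <= t <= x - 1.0 / n:
        return t - (x - 2.0 / n)
    if x - 1.0 / n < t <= x:
        return 1.0 / n
    if x < t <= x + 1.0 / n:
        return (x + 1.0 / n) - t
    return 0.0

def integrate(h, a, b, m=200000):
    """Composite midpoint rule; fine mesh so the narrow support is resolved."""
    dx = (b - a) / m
    return sum(h(a + (i + 0.5) * dx) for i in range(m)) * dx

# Illustrative choices: f = cos on [0, 1], evaluated at x = 0.5.
f = math.cos
x = 0.5
val = 100**2 * integrate(lambda t: f(t) * g_n(t, x, 100), 0.0, 1.0)
# val is close to 2*cos(0.5)
```

Of course, in the proof $f$ is only assumed continuous with $\int f\,g_n = 0$ for all $n$, so the limit $2f(x)$ is forced to be zero; here a generic $f$ just illustrates the rate of concentration.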