
Let $f\in L^1(\mathbb{R})$ and let $V_f$ be the closed linear subspace of $L^1(\mathbb{R})$ generated by the translates $f(\cdot - y)$ of $f$. If $V_f=L^1(\mathbb{R})$, I want to show that $\hat{f}$ never vanishes.

We have $\hat{f}(\xi_0)=0$ if and only if $\hat{h}(\xi_0)=0$ for every $h\in V_f$, but I'm not sure how to proceed beyond this. The Riemann–Lebesgue lemma gives that the Fourier transform maps $L^1(\mathbb{R})$ into $C_0(\mathbb{R})$, and $C_0(\mathbb{R})$ certainly contains functions that vanish somewhere...

I got this problem from 3.3 here: http://www.math.ucdavis.edu/~jlirion/course_notes/Prelim_Solutions.pdf.

EDITED: I originally neglected to say that $V_f$ is the closed subspace generated by the translates, not the set of translates themselves. I don't know if it is reasonable to suppose that this subspace is all of $L^1(\mathbb{R})$ or just a proper subset...

  • The point is that $L^1$ contains functions $g$ such that $\hat{g}(\xi_0) \ne 0$, and so those can't be in $V_f$. (2012-05-09)

2 Answers


Assume, for the sake of contradiction, that

$\widehat{f}(\psi) =0$

Then, if $h$ is a translate of $f$, it is easy to show that

$\widehat{h}(\psi)=0$
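Indeed, with the convention $\hat{f}(\xi) = \int_{\mathbb{R}} f(x) e^{-i\xi x}\, dx$ (any other normalization gives the same conclusion), if $h(x) = f(x-y)$ then the substitution $x \mapsto x + y$ gives

$\widehat{h}(\xi) = \int_{\mathbb{R}} f(x-y)\, e^{-i\xi x}\, dx = e^{-i\xi y}\, \hat{f}(\xi),$

so $\widehat{h}(\psi) = e^{-i\psi y}\, \hat{f}(\psi) = 0$, and by linearity the same holds for any finite linear combination of translates.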

Now, let $g \in V_f$. Then $g= \lim h_n$ where $h_n$ are linear combinations of translates of $f$. Thus

$\widehat{h_n}(\psi) =0$

and since $h_n \to g$ in $L^1$ we get $\widehat{h_n}(\psi) \to \widehat{g}(\psi)$.
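This last convergence follows from the elementary bound $\|\hat{u}\|_\infty \le \|u\|_{L^1}$ applied to $u = h_n - g$:

$\left|\widehat{h_n}(\psi) - \widehat{g}(\psi)\right| = \left|\int_{\mathbb{R}} \big(h_n(x) - g(x)\big)\, e^{-i\psi x}\, dx\right| \le \|h_n - g\|_{L^1} \to 0.$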

Thus, we indeed get $\widehat{g}(\psi) = 0$ for all $g \in V_f$.

Now, to get the contradiction, we use the fact that for each $\psi \in \widehat{G}$ (here $\widehat{G} = \mathbb{R}$, since the dual group of $\mathbb{R}$ is $\mathbb{R}$ itself) there exists some $g \in L^1$ such that $\widehat{g}(\psi) \neq 0$.

This is easy: let $u$ be any compactly supported continuous function on $\hat{G}$ with $u(\psi) \neq 0$. Then $g = (u*\tilde{u})^{\vee}$ works, where $\vee$ denotes the inverse Fourier transform. This last step can also be proven more directly: if $g \in L^1$ is non-zero, then $\hat{g}$ does not vanish at some point, and multiplying $g$ by the right character moves that non-vanishing point to $\psi$.
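On $\mathbb{R}$, where the characters are $x \mapsto e^{i\eta x}$, this modulation trick is a one-line computation: if $\hat{g}(\eta) \neq 0$, set $g_1(x) = e^{i(\psi - \eta)x} g(x)$; then

$\widehat{g_1}(\xi) = \int_{\mathbb{R}} g(x)\, e^{-i(\xi - \psi + \eta)x}\, dx = \hat{g}(\xi - \psi + \eta),$

so $\widehat{g_1}(\psi) = \hat{g}(\eta) \neq 0$, and $g_1 \in L^1$ since $|g_1| = |g|$.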

  • Appreciate the elaboration, thanks. (2012-05-09)

Given $f\in L^1(\mathbb{R})$, let $I(f)=\operatorname{clos}(\operatorname{span}(f_\lambda:\lambda\in\mathbb{R}))$ be the $L^1$-closure of the linear span of the translates $f_\lambda$, where $f_\lambda(x)=f(x+\lambda)$. Then Wiener's Tauberian theorem (1932) says that $I(f)=L^1(\mathbb{R})$ if and only if $\hat{f}$ has no zeros.
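As a concrete illustration (using the convention $\hat{f}(\xi) = \int_{\mathbb{R}} f(x) e^{-i\xi x}\, dx$): for the Gaussian $f(x) = e^{-x^2}$, completing the square gives

$\hat{f}(\xi) = \int_{\mathbb{R}} e^{-x^2} e^{-i\xi x}\, dx = \sqrt{\pi}\, e^{-\xi^2/4},$

which has no zeros, so the theorem says the translates of the Gaussian span a dense subspace of $L^1(\mathbb{R})$.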

In fact, this can be recast in the language of commutative Banach algebras and Gelfand theory.