13

I've been trying to work out the best way to understand why Fourier series converge, and it's a little embarrassing, but I don't even know a rigorous proof. Can someone please help put me on the right track to thinking about these issues in the proper way? I am especially interested in any geometric ways to think about the convergence issue (something which, I suppose, takes advantage of the fact that each component $e^{in\theta}$ corresponds to some point along the unit circle).

Thanks!

  • 2
    But Fourier series don't converge; at least in general they don't converge pointwise. You need extra conditions to ensure pointwise convergence. (2010-10-13)

5 Answers

10

I don't know about a geometric interpretation, but here is a brief sketch of a proof. First we need to be precise about what we mean by "convergence." In the naive sense - that is, pointwise - Fourier series don't always converge. (If you change the value of a function at a single point, the Fourier series remains unchanged.) The sense in which they do always converge is in the Hilbert space $L^2([0, 1])$, which has the inner product $\langle f, g \rangle = \int_0^1 \overline{g(x)} f(x)\, dx$; this induces a norm, which in turn induces a metric. In $L^2([0, 1])$ let $X$ be the subspace spanned by the functions $e^{2\pi i nx}, n \in \mathbb{Z}$. It is fairly straightforward to verify that the functions $e^{2\pi i nx}$ are orthogonal and have norm $1$; generally I think about this in a representation-theoretic way, as a special case of the orthogonality relations for characters.
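
To spell out that verification (a standard one-line computation): write $e_n(x) = e^{2\pi i nx}$. For $m \neq n$, $\langle e_m, e_n \rangle = \int_0^1 e^{2\pi i (m-n)x}\, dx = \left[ \frac{e^{2\pi i (m-n)x}}{2\pi i (m-n)} \right]_0^1 = 0,$ since $e^{2\pi i k} = 1$ for every integer $k$, while for $m = n$ the integrand is identically $1$, so each $e_n$ has norm $1$.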

Then the statement that Fourier series converge is equivalent to the statement that $X$ is dense in $L^2([0, 1])$. Why? Given a sequence in $X$ converging to an element of $L^2([0, 1])$, we can compute the Fourier coefficients, which depend continuously on the sequence and hence converge to a limit. That these coefficients actually represent the element of $L^2([0, 1])$ is a standard Hilbert space argument, and you should take a course in functional analysis if you want to learn this kind of thing thoroughly.
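
(The continuity used here is just Cauchy-Schwarz: the $n$th Fourier coefficient of $f$ is $\hat{f}(n) = \langle f, e_n \rangle$, so $|\hat{f}(n) - \hat{g}(n)| = |\langle f - g, e_n \rangle| \le \|f - g\|_{L^2}$; functions close in $L^2$ have uniformly close Fourier coefficients.)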

Now, something else you need to know about $L^2([0, 1])$ is that the subspace $Y$ consisting of all step functions is dense in it. (If you have trouble believing this, first convince yourself that $Y$ is dense in the continuous functions on $[0, 1]$ and then believe me that the continuous functions are dense in $L^2([0, 1])$. In fact, $L^2([0, 1])$ can be defined as the completion of $C([0, 1])$ with respect to the $L^2$ norm.) So to show that $X$ is dense, it suffices to show that the closure of $X$ contains $Y$. In fact, it suffices to show that $X$ has as a limit point a step function with a single bump, say

$a(x) = \begin{cases} 0 & \text{if } 0 \le x \le \frac{1}{3} \text{ or } \frac{2}{3} \le x \le 1, \\ 1 & \text{otherwise,} \end{cases}$

and to take linear combinations, translations, and dilations of this. In other words, it suffices to prove convergence for square waves. But one can do the computations directly here. There is a standard picture to stare at, and of course if you have ever actually heard a square wave you should believe that audio engineers, at least, are perfectly capable of approximating square waves by sines and cosines.
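
To make "one can do the computations directly" concrete, here is that computation (standard, and easy to check): for $n \neq 0$, $\hat{a}(n) = \int_0^1 a(x) e^{-2\pi i nx}\, dx = \int_{1/3}^{2/3} e^{-2\pi i nx}\, dx = \frac{e^{-2\pi i n/3} - e^{-4\pi i n/3}}{2\pi i n},$ and $\hat{a}(0) = \frac{1}{3}$, so $|\hat{a}(n)| = \frac{|\sin(\pi n/3)|}{\pi |n|}$. Parseval can then be checked by hand: $\sum_n |\hat{a}(n)|^2 = \frac{1}{9} + \frac{3}{4\pi^2} \sum_{n \neq 0,\ 3 \nmid n} \frac{1}{n^2} = \frac{1}{9} + \frac{2}{9} = \frac{1}{3} = \int_0^1 a(x)^2\, dx,$ which (by Pythagoras applied to the partial sums) is exactly the statement that $\|a - S_N a\|_{L^2} \to 0$ for the square wave.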

  • 0
    This is great! I am going to go through my functional analysis books and make sure I understand this proof thoroughly, thanks for the road map. (2010-10-13)
4

You can write the partial sum $S_n(x)$ as an integral ${1\over 2\pi}\int_{-\pi}^\pi D_n(t) f(x-t)dt,$ where the weight function or "kernel" $D_n(t)$ can be easily computed and graphed once and for all. One obtains $D_n(t)={\sin((n+1/2)t)\over \sin(t/2)}.$ So $S_n(x)$ is an "average" of $f$-values from the neighborhood of $x$. The essential point is that $D_n(t)$ is heavily concentrated around $t=0$ and oscillates quickly far away from $0$.
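
For completeness, $D_n$ is the Dirichlet kernel, and both facts are short computations. Substituting $c_k = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(s) e^{-iks}\, ds$ into $S_n(x) = \sum_{k=-n}^{n} c_k e^{ikx}$ and exchanging the (finite) sum with the integral gives the stated integral, with $D_n(t) = \sum_{k=-n}^{n} e^{ikt} = e^{-int}\, \frac{e^{i(2n+1)t} - 1}{e^{it} - 1} = \frac{e^{i(n+1/2)t} - e^{-i(n+1/2)t}}{e^{it/2} - e^{-it/2}} = \frac{\sin((n+1/2)t)}{\sin(t/2)},$ where the middle step sums the geometric series and the next follows from multiplying the numerator and denominator by $e^{-it/2}$.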

4

Since your question was about the geometry behind convergence, I'll chime in with a very geometric way to think about these concepts. However, as Qiaochu Yuan mentions, in order to do so, we must first nail down in what sense we mean convergence. I'll discuss the "big three" types of convergence: pointwise, uniform, and mean-square (also called $L^2$) convergence.

Let's begin by defining a notion of *error* between $f(x)$ and the $N$th partial sum of its Fourier series, denoted by $F_N(x)$, on $-\ell<x<\ell$. Define the (absolute) pointwise error, $p_N(x)$, by $p_N(x)=|f(x)-F_N(x)|, \quad -\ell<x<\ell.$ The geometry of the situation belies its name: $p_N(x)$ represents the point-by-point difference (or error) between $f(x)$ and $F_N(x)$.

We can then define the following three types of convergence based on the behavior of $p(x)$ as $N\to\infty$.

  • $F_N(x)$ converges pointwise to $f(x)$ on $-\ell<x<\ell$ if $p_N(x)\to 0 \text{ as } N\to\infty \text{ for each fixed } x\in(-\ell,\ell).$
  • $F_N(x)$ converges uniformly to $f(x)$ on $-\ell<x<\ell$ if $\sup_{-\ell<x<\ell} p_N(x)\to 0 \text{ as } N\to\infty.$
  • $F_N(x)$ converges in the mean-square or $L^2$ sense to $f(x)$ on $-\ell<x<\ell$ if $\int_{-\ell}^\ell p_N^2(x)\,dx\to 0 \text{ as } N\to\infty.$

Think of each of these in terms of what is happening with the pointwise error as $N\to \infty$. The first says that at a fixed $x$, the difference between $f(x)$ and $F_N(x)$ is going to zero. This may happen for some $x$ in the interval and fail for others. On the other hand, uniform convergence says that the supremum of all pointwise errors tends to zero. Finally, mean-square convergence says that the area under $p_N^2(x)$ must tend to zero as $N\to\infty$.

The first is a very local way to measure error (at a point), whereas the second two are global ways to measure the error (across the entire interval).

We can formulate this in terms of norms by setting $\|f-F_N\|_\infty:=\sup_{-\ell<x<\ell}|f(x)-F_N(x)|.$ Then, $F_N(x)\to f(x)$ uniformly on $-\ell<x<\ell$ provided $\|f-F_N\|_\infty\to 0$ as $N\to\infty$. (This is why we call it the uniform norm!)

On the other hand, if we set $\|f-F_N\|_{L^2}:=\sqrt{\int_{-\ell}^\ell |f(x)-F_N(x)|^2\,dx},$ then $F_N(x)\to f(x)$ in the $L^2$ sense on $-\ell<x<\ell$ provided $\|f-F_N\|_{L^2}\to 0$ as $N\to\infty$. (This is called the $L^2$ norm on $-\ell<x<\ell$.)
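
If you want numbers to go with the pictures below, here is a minimal numerical sketch (my addition, not part of the original answer) estimating both norms of the error for the Fourier sine series of $f(x)=x^2$; taking $\ell = 1$ and computing the coefficients by quadrature are assumptions made purely for concreteness.

```python
import numpy as np

# Minimal sketch (assumptions: ell = 1, f(x) = x^2, trapezoidal quadrature).
ell = 1.0
x = np.linspace(0.0, ell, 20001)
f = x**2

def integrate(y):
    """Trapezoidal rule on the uniform grid x."""
    dx = x[1] - x[0]
    return dx * (y.sum() - 0.5 * (y[0] + y[-1]))

def sine_partial_sum(N):
    """N-th partial sum F_N of the Fourier sine series of f on (0, ell)."""
    F = np.zeros_like(x)
    for n in range(1, N + 1):
        s = np.sin(n * np.pi * x / ell)
        b_n = (2.0 / ell) * integrate(f * s)  # b_n = (2/ell) * int f(x) sin(n pi x / ell) dx
        F += b_n * s
    return F

for N in (5, 10, 50, 200):
    p_N = np.abs(f - sine_partial_sum(N))   # pointwise error p_N(x)
    sup_err = p_N.max()                     # ~ ||f - F_N||_inf
    l2_err = np.sqrt(integrate(p_N**2))     # ~ ||f - F_N||_{L^2}
    print(f"N = {N:3d}:  sup error ~ {sup_err:.4f},  L2 error ~ {l2_err:.6f}")
```

The $L^2$ error should shrink steadily as $N$ grows, while the sup error stays bounded away from zero: every partial sum vanishes at $x=\ell$ while $f(\ell)=\ell^2\neq 0$, which is precisely the failure of uniform convergence visible in the first plot below.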

To illustrate this geometrically, here's $f(x)=x^2$ (black) and its Fourier sine series $F_N(x)$ (blue) on $0<x<\ell$ for $N=5,\dots,50$, along with the corresponding pointwise error (red). We can see this series converges pointwise but not uniformly on $0<x<\ell$. You can also get an idea of the $L^2$ convergence by envisioning the area under the square of the red curve and seeing it tend to zero as well. I was going to post that picture too, but the shaded area is so thin it is difficult to see.

[Figure: $f(x)=x^2$ (black), its Fourier sine partial sums $F_N(x)$ (blue) for $N=5,\dots,50$, and the pointwise error $p_N(x)$ (red).]

These illustrations are of course not a proof of the convergences, but simply a way to interpret them geometrically.

For the sake of completeness, here's an example which does converge uniformly: the same function and interval as above, but $F_N(x)$ is the Fourier cosine series.

[Figure: $f(x)=x^2$ and its Fourier cosine partial sums, which converge uniformly.]

Hope that helps.

3

The way I see Fourier series (especially the trigonometric expansions): you first draw the initial sine and cosine curves at the macro level, and then you bring in higher frequencies to correct the smaller details.

So in the case of an infinite sum, you always go about correcting a bit more on a smaller scale, and at the limit point you have your original function.

(I'm pretty sure I wasn't clear about this, and that I'd need to wave my arms around to explain it, so I've set this CW; if anyone gets the idea and thinks they can clarify it, it will be easier to do so.)

  • 0
    I think the main issue here is that although the picture of the convergence is nice, it doesn't really explain why an infinite series of sines and cosines should converge. I think that by considering a geometric interpretation of Christian's answer, regarding the error term of the partial Fourier sums, we could argue that adding additional terms approximates the function more closely. I will have to think about this much more thoroughly before editing this answer, however. (2010-10-14)
0

Please listen to this video lecture:

http://www.youtube.com/watch?v=3lS5ZMsfUdQ

You will understand the geometry behind the convergence of Fourier series.