I did the following homework question, can you tell me if I have it right?
We want to show that the sequence $(n^2 \alpha \bmod 1)$ is equidistributed if $\alpha \in \mathbb{R} \setminus \mathbb{Q}$. To that end we consider the transformation $T: (x,y) \mapsto (x + \alpha, y + 2x + \alpha)$ on the $2$-dimensional torus $\mathbb{T}^2$ endowed with the Lebesgue measure $\lambda \times \lambda$.
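(Side note, not part of the exercise: to convince myself that $T$ really encodes the sequence $(n^2 \alpha \bmod 1)$, I checked numerically that $T^n(0,0) = (n\alpha \bmod 1, \; n^2 \alpha \bmod 1)$. A minimal Python sketch, with $\alpha = \sqrt{2}$ as an arbitrary example:

```python
import math

# Sanity check: iterating T(x, y) = (x + a, y + 2x + a) from (0, 0)
# should give (n*a mod 1, n^2*a mod 1) after n steps.
a = math.sqrt(2)          # arbitrary example of an irrational alpha

def circ_dist(s, t):
    """Distance on the circle R/Z."""
    d = abs(s - t) % 1.0
    return min(d, 1.0 - d)

x, y = 0.0, 0.0
for n in range(1, 101):
    # the tuple assignment evaluates the right-hand side with the old x, as T requires
    x, y = (x + a) % 1.0, (y + 2 * x + a) % 1.0
    assert circ_dist(x, n * a) < 1e-8
    assert circ_dist(y, n * n * a) < 1e-8
print("T^n(0, 0) = (n*alpha, n^2*alpha) mod 1 for n = 1..100")
```

So the second coordinate of the orbit of $(0,0)$ under $T$ is exactly the sequence we want to equidistribute.)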
a) Show that the action of $T$ on the torus is ergodic, i.e., if a measurable set $A \subset \mathbb{T}^2$ is invariant under $T$, then $(\lambda \times \lambda)(A) \in \{0, 1\}$. Show this by checking the following equivalent definition of ergodicity:
$\forall f \in L^2( \mathbb{T}^2)$, we have: if $f$ is $T$-invariant, then $f$ has to be constant almost everywhere.
Hint: Use Fourier series.
My answer:
$ \begin{align} f(x,y) &= \sum_{j,k \in \mathbb{Z}} c_{jk} e^{2\pi i jx} e^{2\pi i ky} \\ &\stackrel{f = f\circ T}{=} \sum_{j,k \in \mathbb{Z}} c_{jk} e^{2\pi i j(x + \alpha)} e^{2\pi i k(y + 2x + \alpha)} \\ &= \sum_{j,k \in \mathbb{Z}} c_{jk} e^{2\pi i (j + k)\alpha} e^{2\pi i (j + 2k)x} e^{2\pi i ky} \\ &\stackrel{j \rightarrow j-2k}{=} \sum_{j,k \in \mathbb{Z}} c_{(j-2k)k} e^{2\pi i (j-k)\alpha} e^{2\pi i jx} e^{2\pi i ky} . \end{align}$
Comparing Fourier coefficients gives $c_{jk} = c_{(j-2k)k} e^{2\pi i (j-k)\alpha}$, and hence $|c_{jk}| = |c_{(j-2k)k}| = |c_{(j-4k)k}| = \cdots$.
For $k \neq 0$ the indices $(j-2mk, k)$, $m \in \mathbb{N}$, are pairwise distinct, so $\sum_{j,k} |c_{jk}|^2 < \infty$ forces $|c_{(j-2mk)k}| \xrightarrow{m \rightarrow \infty} 0$, and so $c_{jk}$ has to be $0$, too. For $k = 0$ and $j \neq 0$ the relation reads $c_{j0} = c_{j0} e^{2\pi i j \alpha}$ with $e^{2\pi i j \alpha} \neq 1$ (as $\alpha$ is irrational), so $c_{j0} = 0$ as well. Only $c_{00}$ survives, i.e. $f$ is constant almost everywhere.
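Since I always mix up these index substitutions, I also checked the coefficient relation numerically for a small trigonometric polynomial; a rough Python sketch, using the characters $e^{2\pi i (jx + ky)}$ and arbitrary coefficients and $\alpha$:

```python
import numpy as np

# Check numerically that for f(x, y) = sum_{j,k} c_{jk} e^{2*pi*i(jx + ky)}
# the Fourier coefficients of f o T satisfy  c'_{jk} = e^{2*pi*i(j-k)*alpha} * c_{(j-2k) k}.
alpha = np.sqrt(2)
c = {(1, 0): 0.5, (0, 1): 1.0 + 0.3j, (2, -1): -0.7, (-1, 2): 0.2j}   # arbitrary coefficients

N = 32                                    # grid fine enough for these frequencies
t = np.arange(N) / N
X, Y = np.meshgrid(t, t, indexing="ij")
TWO_PI_I = 2j * np.pi

def f(x, y):
    return sum(cc * np.exp(TWO_PI_I * (j * x + k * y)) for (j, k), cc in c.items())

g = f((X + alpha) % 1.0, (Y + 2 * X + alpha) % 1.0)    # samples of f o T

def coeff(h, j, k):
    # Fourier coefficient; the grid average is exact for these trigonometric polynomials
    return (h * np.exp(-TWO_PI_I * (j * X + k * Y))).mean()

for p in range(-4, 5):
    for q in range(-4, 5):
        predicted = np.exp(TWO_PI_I * (p - q) * alpha) * c.get((p - 2 * q, q), 0)
        assert abs(coeff(g, p, q) - predicted) < 1e-9
print("coefficient relation verified")
```

If the reindexing above is correct, all the asserts should pass.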
b) For $x \in \mathbb{T}$ show that $ \frac{1}{m} \sum_{n=1}^m T^n_\ast (\delta_x \times \lambda) \rightarrow \lambda \times \lambda$ using the equidistribution of $(n \alpha \bmod 1)$.
My answer: $ \begin{align} \frac{1}{m} \sum_{n=1}^m T_\ast^n(\delta_x \times \lambda)(A \times B) &= \frac{1}{m} \sum_{n=1}^m (\delta_x \times \lambda) (T^{-n}(A \times B)) \\ &= \frac{1}{m} \sum_{n=1}^m (\delta_x \times \lambda) (T^n(A \times B)) , \end{align} $ where the last equality holds because it should not matter in which direction the points are shifted. Then, writing out what $T^n$ does, I get:
$ \begin{align} \frac{1}{m} \sum_{n=1}^m (\delta_x \times \lambda) (T^n(A \times B)) &= \frac{1}{m} \sum_{n=1}^m (\delta_x \times \lambda) ( (A + \alpha n) \times (B + \alpha n) ) \\ &= \frac{1}{m} \sum_{n=1}^m \delta_x (A + \alpha n) \, \lambda(B + \alpha n) \\ &= \frac{1}{m} \sum_{n=1}^m \chi_{A + \alpha n}(x) \, \lambda(B + \alpha n) . \end{align} $
And then using that the Lebesgue measure $\lambda$ is translation invariant I get: $ \begin{align} \frac{1}{m} \sum_{n=1}^m \chi_{A + \alpha n}(x) \lambda(B + \alpha n) = \frac{1}{m} \sum_{n=1}^m \chi_{A + \alpha n}(x) \lambda(B) . \end{align} $
And finally, by using the ergodic theorem:
$ \begin{align} \lim_{m \to \infty} \frac{1}{m} \sum_{n=1}^m \chi_{A + \alpha n}(x) \lambda(B) = \lambda(B) \int_{\mathbb{T}} \chi_{A} \, d \lambda = \lambda(B)\lambda(A) = (\lambda \times \lambda) (A \times B) . \end{align} $
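As an extra sanity check on this limit (my own sketch, with an arbitrary interval $A$, point $x$ and $\alpha$), the average $\frac{1}{m} \sum_{n=1}^m \chi_{A + \alpha n}(x) = \frac{1}{m} \sum_{n=1}^m \chi_{A}(x - \alpha n)$ should indeed approach $\lambda(A)$:

```python
import math

# Numerical check: (1/m) sum_{n=1}^m chi_A(x - n*alpha) -> lambda(A) for an interval A.
alpha, x = math.sqrt(2), 0.3            # arbitrary choices
A_lo, A_hi = 0.2, 0.5                   # A = [0.2, 0.5), so lambda(A) = 0.3

def chi_A(t):
    return 1.0 if A_lo <= t % 1.0 < A_hi else 0.0

for m in (10**2, 10**3, 10**4, 10**5):
    avg = sum(chi_A(x - n * alpha) for n in range(1, m + 1)) / m
    print(m, round(avg, 4), "-> should approach", A_hi - A_lo)
```

Multiplying by $\lambda(B)$ then gives the claimed limit $\lambda(A) \lambda(B)$.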
c) For $\eta \in (0,1)$ and $x,y \in \mathbb{T}$ define the two sequences $ \begin{align*} \mu_m &= \frac{1}{m} \sum_{n=1}^m T^n_\ast \left(\delta_x \times \left(\frac{1}{2 \eta} \left. \lambda \right \vert_{[y-\eta, y + \eta]} \right) \right) \\ \nu_m &= \frac{1}{m} \sum_{n=1}^m T^n_\ast \left(\delta_x \times \left( \frac{1}{1 - 2 \eta} \left. \lambda \right\vert_{\mathbb{T} \smallsetminus [y-\eta, y + \eta]} \right) \right) \end{align*} $
Using exercise 3 of assignment 10 and the weak$^\ast$-compactness of the unit ball, we know that there exists a subsequence of $\mathbb{N}$ along which both sequences converge. Call the limit points $\mu$ and $\nu$, respectively. Show that $2 \eta \mu + (1 - 2 \eta) \nu = \lambda \times \lambda$.
My answer:
Exercise 3 of assignment 10 was to show that any weak$^\ast$ limit point $\mu$ of the sequence $\mu_n = \frac{1}{n}\sum_{j=0}^{n-1} T_\ast^j \nu$ is a Borel probability measure with $\mu = T_\ast \mu$.
We compute $ 2 \eta \lim_{m \to \infty} \frac{1}{m} \sum_{n=1}^m T_\ast^n \left(\delta_x \times \left( \frac{1}{2 \eta} \left. \lambda \right\vert_{[y- \eta, y + \eta]} \right) \right) + (1 - 2 \eta) \lim_{m \to \infty} \frac{1}{m} \sum_{n=1}^m T_\ast^n \left( \delta_x \times \left( \frac{1}{1- 2 \eta} \left. \lambda \right\vert_{\mathbb{T} \smallsetminus [y- \eta, y + \eta]} \right) \right) , $ which, by part b), is equal to $ \left. \lambda \times \lambda \right\vert_{[y - \eta , y + \eta]} + \left. \lambda \times \lambda \right \vert_{\mathbb{T} \smallsetminus [y - \eta , y + \eta]} = \lambda \times \lambda. $
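In other words (at least for $\eta < \frac{1}{2}$, and assuming I may combine the restricted measures before pushing forward): $ \begin{align*} 2 \eta \cdot \frac{1}{2 \eta} \left. \lambda \right\vert_{[y - \eta, y + \eta]} + (1 - 2 \eta) \cdot \frac{1}{1 - 2 \eta} \left. \lambda \right\vert_{\mathbb{T} \smallsetminus [y - \eta, y + \eta]} = \lambda , \end{align*} $ so by linearity of the pushforward $ \begin{align*} 2 \eta \, \mu_m + (1 - 2 \eta) \, \nu_m = \frac{1}{m} \sum_{n=1}^m T_\ast^n (\delta_x \times \lambda) , \end{align*} $ which converges to $\lambda \times \lambda$ by part b); passing to the common subsequence then gives $2 \eta \mu + (1 - 2 \eta) \nu = \lambda \times \lambda$.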
d) Using the following proposition, show that $\mu = \lambda \times \lambda$.
Proposition: A $T$-invariant probability measure is extremal if and only if its action is ergodic.
My answer:
Distinguish the cases $\eta \geq \frac{1}{2}$ and $\eta < \frac{1}{2}$.
If $\eta < \frac{1}{2}$ then by c) $\lambda \times \lambda = 2 \eta \mu + (1 - 2 \eta) \nu$ and since $2 \eta < 1$, by extremality, $\lambda \times \lambda = \mu = \nu$.
If $\eta \geq \frac{1}{2}$ then $[y - \eta , y + \eta] = \mathbb{T}$ and so
$ \begin{align} \mu(A \times B) &= \lim_{m \to \infty} \mu_m (A \times B) \\ &= \lim_{m \to \infty} \frac{1}{m} \frac{1}{2 \eta} \sum_{n=1}^m \chi_A(x + \alpha n) \lambda \mid_{[y - \eta , y + \eta]} (B) \\ &\stackrel{b)}{=} \frac{1}{ 2 \eta} \lambda \times \lambda \end{align}$
So I think I got the sums wrong here.
e) Show that $\mu_m \to \lambda \times \lambda$. To that end prove and apply the following:
Lemma: Let $X$ be a metric space, $x \in X$, and $(x_n)$ a sequence in $X$. Assume that each subsequence of $(x_n)$ has a subsequence converging to $x$. Then $x_n$ itself converges to $x$.
My answer:
Assume that $x_n$ converges to some $y \neq x$. Then there is a subsequence (the sequence itself) not converging to $x$, a contradiction. The same argument works if $x_n$ diverges.
Now, by c) and d), every subsequence of $(\mu_m)$ has a further subsequence converging weak$^\ast$ to $\lambda \times \lambda$, so by the lemma $\mu_m \to \mu = \lambda \times \lambda$.
I'm stuck on:
f) Show that for all $f \in C(\mathbb{T}^2)$ and for all $\varepsilon > 0$, there exists $\eta > 0$ such that we have
$ \left\vert \int f d \mu_m - \int f d \omega_m \right\vert < \varepsilon ,$ where $ \omega_m = \frac{1}{m} \sum_{n=1}^m T^n_\ast (\delta_x \times \delta_y) .$
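To get a feeling for f) I tried a small numerical experiment (my own sketch; the choices of $f$, $x$, $y$, $m$, $\alpha$ and the discretisation are arbitrary). It compares $\int f \, d\mu_m$ with $\int f \, d\omega_m$:

```python
import math

# Numerical experiment: compare the integrals of f against mu_m and omega_m
# for shrinking eta, using T^n(x, y) = (x + n*alpha, y + 2*n*x + n^2*alpha) mod 1.
alpha = math.sqrt(2)
x0, y0, m = 0.3, 0.7, 1000               # arbitrary choices

def f(u, v):
    # an arbitrary continuous test function on the 2-torus
    return math.cos(2 * math.pi * u) * math.sin(2 * math.pi * v) + math.cos(2 * math.pi * (u + v))

def Tn(x, y, n):
    return (x + n * alpha) % 1.0, (y + 2 * n * x + n * n * alpha) % 1.0

# integral of f against omega_m = (1/m) * sum_n f(T^n(x0, y0))
int_omega = sum(f(*Tn(x0, y0, n)) for n in range(1, m + 1)) / m

def int_mu(eta, K=100):
    # integral of f against mu_m: average f(T^n(x0, s)) over s in [y0-eta, y0+eta]
    # (midpoint rule with K points), then average over n
    total = 0.0
    for n in range(1, m + 1):
        u, _ = Tn(x0, y0, n)
        shift = 2 * n * x0 + n * n * alpha
        total += sum(f(u, (y0 - eta + (k + 0.5) * 2 * eta / K + shift) % 1.0)
                     for k in range(K)) / K
    return total / m

for eta in (0.1, 0.03, 0.01):
    print(eta, abs(int_mu(eta) - int_omega))
```

The difference does seem to be controlled by how much $f$ can vary when the second coordinate moves by at most $\eta$, but I don't see how to turn this into the required estimate.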
Thanks for your help!