Suppose we are given i.i.d. random variables $X_1, X_2, \dots$ in $L^2$ with $\mathrm{E}[X_i]=0$ and $\mathrm{Var}[X_i] = 1$, say. Let $S_n = \sum_{i=1}^n X_i$. It is well known that in this case we have the following convergence theorems:
Law of large numbers: $\frac{S_n}n \to 0$ almost surely.
Central limit theorem: $\frac{S_n}{\sqrt{n}}$ converges in distribution to a $\mathcal N(0,1)$ random variable.
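To make the two scalings concrete, here is a quick simulation sketch (my own illustration, not part of the question; it takes the $X_i$ to be standard normal so the moment assumptions hold, and also tries an intermediate normalization $c_n = n^{3/4}$):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 10_000       # number of summands in each S_n
trials = 1_000   # independent copies of S_n

# Each row is one realization of (X_1, ..., X_n); sum rows to get S_n.
X = rng.standard_normal((trials, n))
S_n = X.sum(axis=1)

# Law of large numbers: S_n / n concentrates at 0
# (its standard deviation is 1/sqrt(n)).
print("std of S_n / n        :", (S_n / n).std())

# Central limit theorem: S_n / sqrt(n) has spread of order 1,
# with sample standard deviation close to 1.
print("std of S_n / sqrt(n)  :", (S_n / np.sqrt(n)).std())

# Intermediate normalization c_n = n^{3/4}: the standard deviation
# of S_n / c_n is n^{1/2 - 3/4} = n^{-1/4}, which still tends to 0.
print("std of S_n / n^{3/4}  :", (S_n / n**0.75).std())
```

The last line hints at why the question is delicate: for any deterministic $c_n$ with $\sqrt{n} \ll c_n \ll n$, the variance of $S_n/c_n$ is $n/c_n^2 \to 0$.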
Does anything interesting happen if we normalize $S_n$ differently? For instance, is it possible to normalize in such a way that $\frac{S_n}{c_n}$ converges to a non-constant random variable (with $\sqrt{n} \ll c_n \ll n$, I'd imagine), or something like that?
This really is just a random thought... If there is a fundamental reason why we only care about the above two normalizations, then I'd be happy to know what that reason is (apart from the fact that these normalizations are particularly natural).