I am considering this in the following sense. According to the central limit theorem, for an i.i.d. process $X_n$ (with mean $m$ and variance $\sigma^2$), the corresponding normalized sum process is $Z_n = \frac{S_n - nm}{\sigma\sqrt{n}}$ with $S_n = X_1 + X_2 + \dots + X_n$, and I know that this does indeed converge in distribution to a zero-mean, unit-variance Gaussian.

My question is: why does this not happen for the Poisson process? I am speaking of the Poisson process derived as a limit of the binomial counting process, where $n$, the number of infinitesimal intervals, goes to $\infty$ and $p$, the success probability, goes to $0$, while their product $np$ stays constant at $\lambda t$. I believe that if the CLT had applied here, we would have obtained a Gaussian $N(t)$ instead of a discrete $N(t)$.
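For concreteness, here is a minimal simulation sketch of the limiting construction I have in mind (the values $\lambda = 2$, $t = 1$, the seed, and the sample size are just my own illustrative choices):

```python
import numpy as np

# Illustrative parameters (my own choices, not from any particular text)
lam, t = 2.0, 1.0                           # rate and time, so lambda*t = 2
rng = np.random.default_rng(0)

for n in (10, 100, 10_000):
    p = lam * t / n                         # success probability per infinitesimal interval
    # S_n = number of successes in n Bernoulli(p) trials = binomial counting process at time t
    S = rng.binomial(n, p, size=200_000)
    m, sigma = p, np.sqrt(p * (1 - p))      # mean and std of a single Bernoulli trial
    Z = (S - n * m) / (sigma * np.sqrt(n))  # CLT-style normalized sum
    # Z stays supported on a lattice of spacing 1/(sigma*sqrt(n)) ~ 1/sqrt(lambda*t),
    # which does not shrink as n grows
    print(n, np.round(np.unique(Z)[:5], 3),
          "lattice spacing ≈", round(1 / (sigma * np.sqrt(n)), 3))
```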
To explore this, consider the Taylor expansion
$$E\left[\exp\left(-\frac{j \omega}{\sigma \sqrt{n}}(X_1-m)\right)\right]=\sum_{k=0}^{\infty} \frac{1}{k!}\left(-\frac{j \omega}{\sigma \sqrt{n}}\right)^k E \left[(X_1-m)^k\right].$$
I would like to examine the higher-order terms when $X_1$ is $\mathrm{Bernoulli}\left(\frac{\lambda t}{n}\right)$, as in the Poisson process. Can we say these terms are really negligible compared to the terms for $k=0,1,2$?
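To make the question concrete, here is a small numerical sketch of the sizes of those terms (the exact central moments of a $\mathrm{Bernoulli}(p)$ variable follow directly from its two-point distribution; the values of $\lambda t$, $\omega$, and $n$ are just illustrative):

```python
import numpy as np
from math import factorial

lam, t, omega = 2.0, 1.0, 1.0                # illustrative values only

for n in (100, 10_000, 1_000_000):
    p = lam * t / n                          # Bernoulli parameter of each trial
    sigma = np.sqrt(p * (1 - p))
    for k in range(2, 7):
        # exact k-th central moment of Bernoulli(p): p*(1-p)^k + (1-p)*(-p)^k
        central_moment = p * (1 - p) ** k + (1 - p) * (-p) ** k
        term = (omega / (sigma * np.sqrt(n))) ** k * abs(central_moment) / factorial(k)
        print(f"n={n:>9}  k={k}  |term| ≈ {term:.3e}")
```

The idea is simply to compare the $k \ge 3$ terms against the $k = 2$ term as $n$ grows while $np = \lambda t$ stays fixed.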
Any help would be greatly appreciated. Thanks!