Let $X_1,\dots,X_n$ be independent random variables, each normally distributed as $X_k\sim N(m_k;\sigma^2_k)$. Let $S_n = \sum_{k=1}^n (X_k - m_k)$ and $T_n = \frac{S_n}{\sqrt{\operatorname{Var}(S_n)}}$.
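Spelling out the denominator: since the $X_k$ are independent, the variances add, so

$$\operatorname{Var}(S_n)=\sum_{k=1}^n \operatorname{Var}(X_k-m_k)=\sum_{k=1}^n \sigma_k^2, \qquad T_n=\frac{S_n}{\sqrt{\sum_{k=1}^n \sigma_k^2}}.$$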
We wish to show that $T_n \to N(0;1)$ in distribution as $n\to\infty$.
I've laboriously proven that $\sum_{k=1}^n X_k \sim N(m; \sigma^2)$, where $m=\sum_k m_k$ and $\sigma^2=\sum_k \sigma^2_k$. So now I'm wondering whether I'm supposed to assume that $m\to 0$ and $\sigma^2\to 1$ as $n\to\infty$, which I don't see a strong justification for.
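(For what it's worth, that result can be sketched quickly via moment generating functions, taking $M_{X_k}(t)=e^{m_k t+\sigma_k^2 t^2/2}$ as the MGF of $N(m_k;\sigma_k^2)$:

$$M_{\sum_k X_k}(t)=\prod_{k=1}^n M_{X_k}(t)=\prod_{k=1}^n e^{m_k t+\sigma_k^2 t^2/2}=e^{mt+\sigma^2 t^2/2},$$

which is the MGF of $N(m;\sigma^2)$.)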
Is this what I'm supposed to do? What's the justification for it?
(Note: this is Exercise 14.31.7 in Apostol's Calculus II. Looking online, it seems some other sources don't state the CLT as convergence to the standard normal, but just to a normal distribution with some mean and variance, so this could be an unusual formulation he's using.)