I'm looking at Gaussian distributions in infinite-dimensional Hilbert space, and the sources I've seen so far say that the covariance operator has to be of trace class (i.e. its trace must be finite). Amongst other things this condition rules out the canonical $\mathcal{N}(0,I_{\infty})$ Gaussian distribution.
The argument I've seen for ruling out the canonical Gaussian runs as follows: we want the projections of this distribution onto finite-dimensional subspaces to be Gaussian (which of course we do), and we also require every open $\epsilon$-ball in the Hilbert space to have non-zero probability measure. From these two requirements one can derive a contradiction, because the measure of the projection of any particular $\epsilon$-ball onto a finite-dimensional subspace is at least the measure of the original $\epsilon$-ball, while the measure of that projected ball goes to zero as the dimension of the subspace increases.
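As a quick numerical illustration of the second half of that argument (the choice of $\epsilon$ and the dimensions are arbitrary, and scipy is used only to evaluate the $\chi^2$ CDF), here is a sketch of how fast the standard Gaussian measure of an $\epsilon$-ball about the origin in $\mathbb{R}^d$ shrinks:

```
# For X ~ N(0, I_d), P(||X|| <= eps) = P(chi^2_d <= eps^2),
# which shrinks towards 0 as the dimension d grows.
from scipy.stats import chi2

eps = 1.0
for d in [1, 5, 10, 50, 100, 500]:
    print(d, chi2.cdf(eps**2, df=d))
```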
So far so good, but why do we insist that the measure of every $\epsilon$-ball is non-zero? The natural limit of the sequence of canonical Gaussians, it seems to me, is a point mass (Dirac $\delta$-distribution) at infinity in the Hilbert space, so if we define it as such, doesn't the contradiction go away? And don't we then get back the ability to whiten Gaussian RVs in the Hilbert space, which is frequently useful?
I'm aware that "Here be dragons" - what problems am I not anticipating?
Edit:
Okay - I think that I have a partial answer to the point regarding the value of allowing the canonical Gaussian in infinite-dimensional space. Everything that Nate says in his reply is fine - my real point (and perhaps I'm guilty of not being very clear about this) was that we can see where the sequence of finite-dimensional canonical Gaussians is heading, so would it be useful to let the limit in? On the surface it looks like it might be, but on reflection I think that it isn't, even if we were to disregard the serious issues related to extending the Hilbert space.
Let's see first where the canonical Gaussians are heading: in fact it is not what I hand-waved as a point mass at infinity, but a distribution on the surface of an infinite-dimensional hypersphere of infinite radius (somehow I managed to convince myself that the two were equivalent, but they can't be, as that would contradict the strong law of large numbers). I can see that this is problematic by itself, but let's run with it for a bit.
Now, the above follows from several applications of the strong law of large numbers. If we have a vector of iid standard Gaussian RVs $X_{i} \overset{iid}{\sim} \mathcal{N}(0,1)$, then the squares of the components are $\chi^{2}_{1}$ distributed with expected value $1$, so $\frac{1}{d}\sum_{i=1}^{d}X_{i}^2 \rightarrow 1$ almost surely as $d \rightarrow \infty$. That is, if $X = (X_{1},X_{2}, \ldots, X_{d})$, then $\|X\|/\sqrt{d} \rightarrow 1$ a.s. as $d \rightarrow \infty$; informally, the norm of $X$ is about $\sqrt{d}$.
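A quick Monte Carlo sanity check of this norm concentration (the dimensions and the seed are arbitrary choices for illustration):

```
# Check that ||X|| / sqrt(d) is close to 1 for X ~ N(0, I_d) as d grows.
import numpy as np

rng = np.random.default_rng(0)
for d in [10, 100, 1000, 10000]:
    X = rng.standard_normal(d)
    print(d, np.linalg.norm(X) / np.sqrt(d))
```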
Likewise one can show with SLLN that the mean vector is the zero vector (and so whatever the infinite dimensional distribution looks like, it is symmetric about the origin).
Finally, one can show with SLLN that if $X$ and $X'$ are two independent such vectors then $\left\langle \frac{X}{\|X\|},\frac{X'}{\|X'\|} \right\rangle \rightarrow 0$ as $d \rightarrow \infty$. Since $X$ is the zero vector with probability zero, the normalisations are well defined, and so (with probability $1$) each pair of such vectors becomes orthogonal in the limit.
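Here is a similar Monte Carlo check of these last two facts; I am reading the "mean vector" claim as the coordinate average of a single draw tending to zero (the coordinates are iid, so the SLLN applies across them), and the orthogonality claim as the cosine of the angle between two independent draws tending to zero:

```
# For X, X' ~ N(0, I_d) independent: the coordinate average of X tends to 0,
# and the normalised inner product <X/||X||, X'/||X'||> tends to 0 as d grows.
import numpy as np

rng = np.random.default_rng(1)
for d in [10, 100, 1000, 10000]:
    X = rng.standard_normal(d)
    Xp = rng.standard_normal(d)
    mean_coord = X.mean()
    cos_angle = (X @ Xp) / (np.linalg.norm(X) * np.linalg.norm(Xp))
    print(d, mean_coord, cos_angle)
```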
These are all pretty well-known aspects of the curse of dimensionality, and taken together they imply that, in the limit, the points end up mutually orthogonal on the surface of a hypersphere about the origin.
As to utility, why did I want to whiten Gaussians in this infinite dimensional space in the first place? Well naively I thought that if the variables are iid Gaussian, maybe I can show some nice concentration effects which would help me solve a problem I'm working on (hasty generalization is my besetting sin...). Of course there is concentration in this situation but it isn't of a type I can usefully employ.
Thanks, Nate, for your comments and answer (which I've accepted).