I need to get familiar with probabilities over separable Hilbert spaces. It's a bit difficult for me to think of something as a probability distribution over a space of functions: I can't imagine how it could be defined, except in terms of finite-dimensional objects.
For example, let's consider the only probability distribution over a function space that I know of: the Gaussian Process. We say that
$f\sim GP(\mu(x),c(x,x'))$
if, for any finite set $X_n=\{x_1, \dots, x_n\}$, $\mathbf{f}=(f(x_1),\dots,f(x_n))$ is a random vector with the multivariate normal distribution $\mathcal{N}(\boldsymbol{\mu},\boldsymbol{\Sigma})$, where $\boldsymbol{\mu}=(\mu(x_1),\dots,\mu(x_n))$ and $\boldsymbol{\Sigma}$ is the symmetric positive semidefinite matrix with components $\Sigma_{ij}=c(x_i,x_j)$. Are there other practical examples of probability laws over function spaces? Are they always defined in terms of probability distributions of finite-dimensional vectors/matrices, or is it possible to define them directly in terms of infinite-dimensional objects?
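To make the finite-dimensional-marginals definition concrete, here is a minimal sketch of my own (not from any particular library) that samples a GP at a finite set of inputs exactly as described above: build $\boldsymbol{\mu}$ and $\boldsymbol{\Sigma}$ from $\mu$ and $c$, then draw from $\mathcal{N}(\boldsymbol{\mu},\boldsymbol{\Sigma})$. The function name `sample_gp`, the jitter term, and the squared-exponential kernel are all illustrative choices, not part of the question.

```python
import numpy as np

def sample_gp(x, mu, c, rng, jitter=1e-9):
    """Draw one realization of f = (f(x_1), ..., f(x_n)) from GP(mu, c)."""
    m = np.array([mu(xi) for xi in x])
    Sigma = np.array([[c(xi, xj) for xj in x] for xi in x])
    # small jitter on the diagonal keeps the Cholesky factorization stable
    L = np.linalg.cholesky(Sigma + jitter * np.eye(len(x)))
    return m + L @ rng.standard_normal(len(x))

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 5)
mu = lambda t: 0.0                              # zero mean function
c = lambda s, t: np.exp(-0.5 * (s - t) ** 2)    # squared-exponential kernel
f = sample_gp(x, mu, c, rng)                    # one sample of the 5-dim marginal
```

Note that the code only ever touches finite-dimensional objects, which is exactly the point of the question: the infinite-dimensional law is pinned down by the consistency of these finite marginals (Kolmogorov extension).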
Also, does it make sense to talk about a CDF for infinite-dimensional objects? It seems to me that we can at least talk of a Cumulative Distribution Functional. For example, it could make sense to compute
$P(F(x) where $F(x)$ is my random function and $g(x)$ is any element of the separable Hilbert space, by interpreting the above inequality in the natural sense. I.e., we ask ourselves which is the probability that $F$ is less than $g$ for all $x\in X$ where $X$ is the domain of the functions in our separable Hilbert space. Then the role of the CDF in finite-dimensional probability spaces would be taken by the functional $\mathcal{F}(g)=P(F(x)