Suppose we are given a set of points $(x_i, y_i)\in\mathbb{R}^2$ and are told that they are drawn from a normal (Gaussian) distribution. In that case it is a simple matter to estimate the mean $(\mu_x,\mu_y)$ of the distribution: the sample mean $(\langle x \rangle, \langle y \rangle)$, where $\langle x \rangle=\frac{1}{N}\sum_i x_i$ and similarly for $\langle y \rangle$, is an unbiased estimator.
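A minimal sketch of the unswapped baseline, with illustrative parameters (the mean $(1, 3)$ and unit variance are my choices, not from the problem statement):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: 10,000 draws from a Gaussian with mean (1.0, 3.0)
# and independent unit-variance coordinates.
mu_true = np.array([1.0, 3.0])
pts = rng.normal(mu_true, 1.0, size=(10_000, 2))

# The sample mean (<x>, <y>) is an unbiased estimator of (mu_x, mu_y).
mu_hat = pts.mean(axis=0)
```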
Now suppose, instead, that each point is randomly presented as either $(x_i,y_i)$ or $(y_i, x_i)$, with equal probability. Is there still a comparably simple way to estimate the mean of the original distribution? (To fix which component is which, assume $\mu_x \le \mu_y$.) This is equivalent to the case where the points are drawn from a mixture of two Gaussians, one of which is constrained to be the reflection of the other across the line $y=x$. However, one might hope that the symmetry of the problem leads to some simplification relative to the general case of two Gaussians.
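One concrete approach is to treat this as the constrained two-component mixture and run EM, where the only latent variable per point is whether it was swapped. The sketch below assumes, for simplicity, an isotropic unit-variance Gaussian (and the specific mean $(0, 2)$ is illustrative, not from the problem statement); a general covariance would require reflecting the covariance matrix as well:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed, not from the original post):
# isotropic unit-variance Gaussian with mean mu_true, mu_x <= mu_y.
mu_true = np.array([0.0, 2.0])
N = 20_000

pts = rng.normal(mu_true, 1.0, size=(N, 2))
swap = rng.random(N) < 0.5            # each point swapped with probability 1/2
pts[swap] = pts[swap][:, ::-1]

def em_reflected_mean(pts, n_iter=100):
    """EM for the mean of an isotropic unit-variance Gaussian whose
    points are observed with coordinates swapped with probability 1/2."""
    # Initialize from the (biased) min/max estimator; it at least lands
    # on the correct side of the line y = x.
    mu = np.array([np.min(pts, axis=1).mean(), np.max(pts, axis=1).mean()])
    for _ in range(n_iter):
        # E-step: posterior probability that each point was NOT swapped,
        # r = exp(-d0/2) / (exp(-d0/2) + exp(-d1/2)).
        d0 = ((pts - mu) ** 2).sum(axis=1)           # sq. distance as observed
        d1 = ((pts[:, ::-1] - mu) ** 2).sum(axis=1)  # sq. distance if un-swapped
        r = 1.0 / (1.0 + np.exp((d0 - d1) / 2.0))
        # M-step: average the points, un-swapping each in proportion to 1 - r.
        mu = (r[:, None] * pts + (1.0 - r)[:, None] * pts[:, ::-1]).mean(axis=0)
    return mu

mu_est = em_reflected_mean(pts)
```

Because the two components are reflections of each other, each EM iteration reduces to a single soft "un-swap and average" step rather than a full mixture update.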
Note, by the way, that taking $\mu_x=\langle \min(x, y)\rangle$ and $\mu_y = \langle \max (x,y) \rangle$ does not work: this is adequate only when the two Gaussian components are well separated by the line $y=x$. Otherwise it introduces a systematic bias away from that line; in the extreme case $\mu_x=\mu_y$, it still reports $\langle \min \rangle < \mu_x$ and $\langle \max \rangle > \mu_y$.
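The bias is easy to exhibit in the worst case, with both coordinates drawn from the same distribution (the choice $\mu_x=\mu_y=0$ with independent unit-variance coordinates is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Worst case for the min/max estimator: the mean lies on the line y = x
# (mu_x = mu_y = 0), with independent unit-variance coordinates.
pts = rng.normal(0.0, 1.0, size=(100_000, 2))

mu_x_hat = np.minimum(pts[:, 0], pts[:, 1]).mean()
mu_y_hat = np.maximum(pts[:, 0], pts[:, 1]).mean()

# For iid N(0,1) coordinates, E[min] = -1/sqrt(pi) ~ -0.564 and
# E[max] = +1/sqrt(pi): the estimates are pushed away from y = x.
```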