
(This is related to this question)

$Q \in \mathbb{R}^{n\times k}$ is a random matrix where $k < n$ and the columns of $Q$ are orthonormal (i.e. $Q^T Q = I$). To examine $E(QQ^T)$, I conducted Monte Carlo simulations (using MATLAB):

[Q, R] = qr(randn(n, k), 0);

In other words, I sampled an $n\times k$ matrix with i.i.d. standard Gaussian entries, computed its (thin) QR decomposition, and assumed the resulting $Q$ is uniformly distributed over the set where $Q^TQ=I$. Joriki's answer and my simulations agreed, so I assume there's nothing majorly wrong with how I obtained the samples.
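For reference, here is a Python/NumPy sketch of the same sampling procedure (the function name `random_orthonormal` is mine, not from the original). One subtlety worth flagging: LAPACK-based QR does not fix the signs of $R$'s diagonal, so forcing them positive makes $Q$ agree with the Gram–Schmidt output; the Monte Carlo average of $QQ^T$ should then come out close to $(k/n)I$, which is what symmetry predicts.

```python
import numpy as np

def random_orthonormal(n, k, rng=None):
    """Sample Q uniformly from {Q in R^{n x k} : Q^T Q = I}."""
    rng = np.random.default_rng(rng)
    A = rng.standard_normal((n, k))   # i.i.d. N(0,1) entries
    Q, R = np.linalg.qr(A)            # thin QR, like qr(..., 0) in MATLAB
    # Fix column signs so Q matches the Gram-Schmidt output; without this,
    # LAPACK's arbitrary sign convention can bias the distribution.
    return Q * np.sign(np.diag(R))

# Monte Carlo estimate of E(QQ^T); by symmetry it should be (k/n) I.
n, k, trials = 4, 2, 20000
M = sum(Q @ Q.T for Q in (random_orthonormal(n, k, s) for s in range(trials))) / trials
```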

I have two questions (in order of importance):

  1. How does one prove that the $Q$ sampled as above is uniformly distributed in the space where $Q^TQ=I$?
  2. Are there more efficient methods of sampling orthogonal $Q$?
  • @mike I think it might be easier if you replied in an answer. – 2012-04-30

1 Answer


The claim is that if you apply Gram–Schmidt to the columns of a matrix whose entries are i.i.d. standard normal, the resulting distribution is Haar measure on the orthogonal matrices. Gram–Schmidt always produces an orthogonal matrix, so the issue is invariance of the distribution under orthogonal transformations. This holds because orthogonal transformations preserve both i.i.d. normals and all the ingredients of the Gram–Schmidt procedure, i.e. inner products and norms.

Suppose $X$ is a $2 \times 2$ matrix with columns $x_1, x_2$ (the general case is the same). Gram–Schmidt gives an orthogonal matrix $X_{GS}$ with columns
$$\frac{x_1}{\Vert x_1 \Vert}, \qquad \frac{x_2 - \langle x_1, x_2 \rangle \frac{x_1}{\Vert x_1 \Vert}}{\Vert \text{the numerator} \Vert}.$$
If $O$ is an orthogonal matrix, $OX$ has columns $Ox_1, Ox_2$, and applying Gram–Schmidt to it gives the columns
$$\frac{Ox_1}{\Vert Ox_1 \Vert}, \qquad \frac{Ox_2 - \langle Ox_1, Ox_2 \rangle \frac{Ox_1}{\Vert Ox_1 \Vert}}{\Vert \text{the numerator} \Vert}.$$
Now use the orthogonality of $O$ twice. First, $Ox_1, Ox_2$ are i.i.d. with the same distribution as $x_1, x_2$, and that determines the distribution, so this matrix has the same distribution as $X_{GS}$. Second, since $O$ preserves norms and inner products, the matrix is equal to
$$\frac{Ox_1}{\Vert x_1 \Vert}, \qquad \frac{Ox_2 - \langle x_1, x_2 \rangle \frac{Ox_1}{\Vert x_1 \Vert}}{\Vert \text{the numerator} \Vert},$$
which is exactly $OX_{GS}$. Together these show that the distribution of $X_{GS}$ is invariant under left-multiplication by orthogonal matrices.

  • @Jason: "Haar measure" is in essence a fancy way of saying that the distribution is invariant under the relevant symmetry operations, in this case orthogonal transformations. – 2012-05-01