If the covariance is the $2\times2$ identity matrix, then the density is $e^{-(x^2+y^2)/2}$ multiplied by a suitable normalizing constant (namely $1/(2\pi)$). If $\begin{bmatrix} X \\ Y \end{bmatrix}$ is a random vector with this distribution, then you rotate that random vector by multiplying on the left by a $2\times 2$ rotation matrix: $ G \begin{bmatrix} X \\ Y \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}\begin{bmatrix} X \\ Y \end{bmatrix}. $
If the question is how to "rotate" the probability distribution, then the answer is that it is invariant under rotations about the origin, since it depends on $x$ and $y$ only through the distance $\sqrt{x^2+y^2}$ from the origin to $(x,y)$.
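A quick numerical sanity check of this invariance (a minimal sketch using NumPy; the angle and test point are arbitrary choices): the density takes the same value at a point and at its rotated image, because rotation preserves distance from the origin.

```python
import numpy as np

# Density of the standard bivariate normal (identity covariance);
# it depends on (x, y) only through x^2 + y^2.
def density(x, y):
    return np.exp(-(x**2 + y**2) / 2) / (2 * np.pi)

theta = 0.7  # an arbitrary rotation angle
G = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

p = np.array([1.3, -0.4])  # an arbitrary test point
q = G @ p                  # its image under rotation about the origin

# Rotation invariance: same density value before and after rotating.
print(np.isclose(density(*p), density(*q)))  # True
```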
If you multiply on the left by a $k\times2$ matrix $G$, you have $ \mathbb{E}\left(G\begin{bmatrix} X \\ Y \end{bmatrix}\right) = G\mathbb{E}\begin{bmatrix} X \\ Y \end{bmatrix} $ and $ \operatorname{var}\left( G \begin{bmatrix} X \\ Y \end{bmatrix} \right) = G\left(\operatorname{var}\begin{bmatrix} X \\ Y \end{bmatrix}\right)G^T, $ a $k\times k$ matrix. If the variance in the middle is the $2\times2$ identity matrix and $G$ is the $2\times 2$ rotation matrix given above, then the variance of the rotated vector is $GIG^T = GG^T$, and since $G$ is orthogonal that is just the $2\times 2$ identity matrix. The only fact you need after that is that if you multiply a multivariate normal random vector by a matrix, what you get is still multivariate normal. I'll leave the proof of that as an exercise.
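The covariance identity can be checked directly (again a sketch with an arbitrary angle; the Monte Carlo tolerance is a loose bound chosen for illustration): $GG^T$ equals the identity exactly, and the empirical covariance of rotated standard-normal samples stays close to it.

```python
import numpy as np

theta = 0.7  # an arbitrary rotation angle
G = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# var(G Z) = G var(Z) G^T; with var(Z) = I this is G G^T = I exactly.
cov = G @ np.eye(2) @ G.T
print(np.allclose(cov, np.eye(2)))  # True

# Monte Carlo check: rotate standard bivariate normal samples and
# confirm the empirical covariance is still (approximately) I.
rng = np.random.default_rng(0)
Z = rng.standard_normal((100_000, 2))  # rows are samples of (X, Y)
W = Z @ G.T                            # each row rotated by G
print(np.allclose(np.cov(W.T), np.eye(2), atol=0.02))  # True
```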