After more careful reading of your question, I think that what you
really want is a proof that $R\cos(Q)$ and $R\sin(Q+a)$ are jointly
normal random variables and have the bivariate joint normal density
that you are familiar with. The rest (means, variances,
covariance, and so on) you have been able to work out for yourself. As
@yohBS points out in his comment,
$$R\sin(Q+a) = R\cos(Q)\sin(a)+R\sin(Q)\cos(a)$$
is a linear combination of $R\cos(Q)$ and $R\sin(Q)$ which you
know are independent and therefore jointly normal random variables.
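
As a quick sanity check of that premise, here is a minimal simulation sketch. It assumes the usual Box-Muller setup, where $R$ is Rayleigh and $Q$ is uniform on $[0, 2\pi)$ (that setup is my assumption; your question may specify the distributions differently):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
R = rng.rayleigh(size=n)                 # assumption: R is Rayleigh with scale 1
Q = rng.uniform(0, 2 * np.pi, n)         # assumption: Q is uniform on [0, 2*pi)

X, Y = R * np.cos(Q), R * np.sin(Q)
print(X.mean(), X.std())                 # ~0 and ~1: standard normal marginal
print(Y.mean(), Y.std())                 # ~0 and ~1
print(np.corrcoef(X, Y)[0, 1])           # ~0: uncorrelated (in fact independent)
```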
*Edited in response to Didier Piau's comments.*
Many people define jointly normal random variables in terms of
linear functionals: $X$ is a vector of jointly normal random
variables (normal vector for short) if and only if every linear
functional of $X$ yields a normal random variable. Affine functionals
also yield normal random variables. (Constants count as honorary
normal random variables with variance $0$.)
Thus, if $X$ is a normal vector, then
$Y = a_0+\sum a_iX_i$ and $Z = b_0+\sum b_iX_i$ are normal random variables,
and since
$$\alpha Y + \beta Z
= (\alpha a_0+\beta b_0) + \sum_i (\alpha a_i+\beta b_i)X_i$$
is a normal random variable for all $\alpha$ and $\beta$,
$(Y,Z)$ is a normal vector.
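
To make this concrete, here is a small numerical sketch (the coefficients $a_i$, $b_i$, $\alpha$, $\beta$ are arbitrary values of my choosing): it forms $Y$ and $Z$ from a standard normal vector $X$ and checks that $\alpha Y + \beta Z$, standardized by its exact mean and standard deviation, is consistent with $N(0,1)$.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
X = rng.standard_normal((200_000, 3))    # rows: iid standard normal 3-vectors

a0, a = 1.0, np.array([0.5, -1.0, 2.0])  # Y = a0 + sum_i a_i X_i
b0, b = -0.5, np.array([3.0, 0.0, 1.0])  # Z = b0 + sum_i b_i X_i
Y = a0 + X @ a
Z = b0 + X @ b

alpha, beta = 0.3, -1.7
W = alpha * Y + beta * Z                 # should again be normal

# Exact mean and standard deviation, read off the displayed formula:
# W = (alpha*a0 + beta*b0) + sum_i (alpha*a_i + beta*b_i) X_i.
c0 = alpha * a0 + beta * b0
c = alpha * a + beta * b
W_std = (W - c0) / np.linalg.norm(c)

# A large p-value is consistent with W_std ~ N(0,1).
print(stats.kstest(W_std, "norm"))
```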
More generally, if $X$ is a normal $n$-vector, then $AX+B$ is a
normal vector for any $m\times n$ matrix $A$ and real $m$-vector $B$.
If $X$ is a single continuous random variable with pdf $f_X(x)$,
then for $a \neq 0$, $Y = aX+b$ is a continuous random variable
with pdf $f_Y(y) = \frac{1}{|a|}f_X\left(\frac{y-b}{a}\right)$.
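
The one-dimensional formula is easy to check numerically; the following sketch (with $X$ standard normal and values of $a$ and $b$ I picked arbitrarily) compares a histogram of $Y = aX+b$ against $\frac{1}{|a|}f_X\left(\frac{y-b}{a}\right)$:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
a, b = -2.0, 1.5                         # arbitrary, a != 0
x = rng.standard_normal(1_000_000)
y = a * x + b

hist, edges = np.histogram(y, bins=200, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
formula = stats.norm.pdf((centers - b) / a) / abs(a)
print(np.max(np.abs(hist - formula)))    # small, up to Monte Carlo noise
```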
Similarly, for a random $n$-vector $X$ of jointly continuous
random variables with joint pdf $f_X(x)$,
invertible $n\times n$ matrix $A$ and real $n$-vector $B$, $Y = AX+B$
is an $n$-vector of jointly continuous random variables with joint pdf
$$f_Y(y) = \frac{1}{|\text{det}(A)|}f_X(A^{-1}(y-B))$$
where $\text{det}(A)$, the determinant of $A$, comes from
the Jacobian of the affine transformation.
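
Since nothing about this change-of-variables formula is specific to normal densities, here is a sketch that checks it on a deliberately non-normal example of my own: $X$ uniform on the unit square, so that $Y = AX+B$ should have constant density $1/|\text{det}(A)|$ on the image parallelogram.

```python
import numpy as np

rng = np.random.default_rng(4)
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])               # invertible, det(A) = 3
B = np.array([0.5, -1.0])

X = rng.random((1_000_000, 2))           # uniform on the unit square: f_X = 1
Y = X @ A.T + B

# Estimate f_Y at y0 = A x0 + B for an interior point x0: the fraction of
# samples landing in a small box around y0, divided by the box area.
x0 = np.array([0.4, 0.6])
y0 = A @ x0 + B
h = 0.05
inside = np.all(np.abs(Y - y0) < h, axis=1)
print(inside.mean() / (2 * h) ** 2)      # ~ 1/|det(A)| = 1/3
print(1 / abs(np.linalg.det(A)))
```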
For a random $n$-vector $X$, the mean vector $E[X]$ is a real vector
with $i$-th coordinate $E[X_i]$ and the $n\times n$
covariance matrix $S$
is $E[(X-E[X])(X-E[X])^T]$, whose $(i,j)$-th entry is
$\text{cov}(X_i,X_j) = E[(X_i-E[X_i])(X_j-E[X_j])]$. Suppose now
that the $X_i$ are independent standard normal random variables,
so that the joint pdf of the $X_i$'s is
$$
f_X(x) = \prod_{i=1}^n f_{X_i}(x_i)
= \prod_{i=1}^n \frac{1}{\sqrt{2\pi}}\exp(-x_i^2/2)
= (2\pi)^{-n/2}\exp\left(-\frac{1}{2}x^Tx\right).
$$
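
As a one-line numerical confirmation of this factorization (the test point $x$ is an arbitrary choice of mine):

```python
import numpy as np
from scipy import stats

x = np.array([0.3, -1.2, 0.7])                            # arbitrary test point
lhs = np.prod(stats.norm.pdf(x))                          # product of marginals
rhs = (2 * np.pi) ** (-len(x) / 2) * np.exp(-0.5 * x @ x)
print(lhs, rhs)                                           # equal up to floating point
```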
Then, $Y = AX+B$ is a normal vector, and it has mean vector
$$E[Y] = E[AX+B] = AE[X]+B = B,$$ and covariance matrix
$$S_Y = E[(Y-B)(Y-B)^T]= E[AXX^TA^T] = AE[XX^T]A^T = AA^T$$
since $E[XX^T] = I$, the identity matrix. Furthermore,
the joint pdf of the $Y_i$'s is
$$\begin{align*}
f_Y(y) &= \frac{1}{|\text{det}(A)|}f_X(A^{-1}(y-B))\\
&= \frac{1}{(2\pi)^{n/2}|\text{det}(A)|}
\exp\left(-\frac{1}{2}(A^{-1}(y-B))^TA^{-1}(y-B)\right)\\
&=\frac{1}{(2\pi)^{n/2}|\text{det}(A)|}
\exp\left(-\frac{1}{2}(y-B)^T(A^{-1})^TA^{-1}(y-B)\right)\\
&= \frac{1}{(2\pi)^{n/2}|\text{det}(A)|^{1/2}|\text{det}(A^T)|^{1/2}}
\exp\left(-\frac{1}{2}(y-B)^T(AA^T)^{-1}(y-B)\right)\\
&= \frac{1}{(2\pi)^{n/2}|\text{det}(AA^T)|^{1/2}}
\exp\left(-\frac{1}{2}(y-B)^T(AA^T)^{-1}(y-B)\right)\\
&= \frac{1}{(2\pi)^{n/2}|\text{det}(S_Y)|^{1/2}}
\exp\left(-\frac{1}{2}(y-B)^TS_Y^{-1}(y-B)\right).
\end{align*}
$$
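
The whole chain can be sanity-checked numerically; the sketch below (with $A$, $B$, and the test point $y$ chosen arbitrarily by me) verifies the mean vector, the covariance matrix, and the final density formula against scipy's multivariate normal:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
A = np.array([[1.5, 0.2], [-0.4, 1.1]])  # arbitrary invertible matrix
B = np.array([0.3, -0.8])
S_Y = A @ A.T

# Monte Carlo check of the moments.
X = rng.standard_normal((1_000_000, 2))
Y = X @ A.T + B                          # each row is A x + B
print(Y.mean(axis=0), B)                 # means agree with B
print(np.cov(Y, rowvar=False))           # ~ S_Y
print(S_Y)

# Density at a test point: the derived formula vs scipy.
# Here n = 2, so (2*pi)^(n/2) is just 2*pi.
y = np.array([1.0, 0.5])
z = y - B
formula = np.exp(-0.5 * z @ np.linalg.solve(S_Y, z)) / (
    2 * np.pi * np.sqrt(abs(np.linalg.det(S_Y)))
)
print(formula, stats.multivariate_normal(mean=B, cov=S_Y).pdf(y))
```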
In this world view, $[R\cos(Q), R\sin(Q+a)]^T$ is a normal
vector since, by the identity at the top, it is the image of
$[R\cos(Q), R\sin(Q)]^T$, a vector of independent standard
normal random variables, under the linear transformation
$$A = \begin{bmatrix} 1 & 0 \\ \sin(a) & \cos(a) \end{bmatrix},$$
and so the joint pdf of $R\cos(Q)$ and $R\sin(Q+a)$ is a
bivariate normal density with covariance matrix $AA^T$ that
can be worked out from the general form above.
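
A closing simulation sketch ties it together (again assuming the usual $R$ Rayleigh, $Q$ uniform on $[0, 2\pi)$ setup, with an arbitrary angle $a$): the sample covariance of $[R\cos(Q), R\sin(Q+a)]$ should match $AA^T = \begin{bmatrix} 1 & \sin(a) \\ \sin(a) & 1 \end{bmatrix}$.

```python
import numpy as np

rng = np.random.default_rng(6)
n, a = 1_000_000, 0.9                    # sample size; arbitrary fixed angle
R = rng.rayleigh(size=n)                 # assumption: R is Rayleigh
Q = rng.uniform(0, 2 * np.pi, n)         # assumption: Q is uniform

V = np.column_stack([R * np.cos(Q), R * np.sin(Q + a)])
M = np.array([[1.0,       0.0],
              [np.sin(a), np.cos(a)]])   # the linear map from the text

print(np.cov(V, rowvar=False))           # ~ M @ M.T = [[1, sin a], [sin a, 1]]
print(M @ M.T)
```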