To find the expectation value and variance of $\mu$ given $X$, you need the conditional probability for $\mu$ given $X$, which is:
$p(\mu|X)=\frac{p(\mu,X)}{p(X)}\;,$
with
$p(X)=\int_{-\infty}^\infty p(\mu,X)\mathrm d\mu\;.$
For the given distributions, we have
$p(\mu,X)=\frac1{\sqrt{2\pi}}\mathrm e^{-\mu^2/2}\prod_{i=1}^n\frac1{\sqrt{2\pi}\sigma}\mathrm e^{-(X_i-\mu)^2/(2\sigma^2)}\;.$
We don't need the constant factors since they drop out when we calculate expectation values. With $S_0:=n$ and $S_k:=\sum\limits_{i=1}^n(X_i)^k$, this becomes
$ p(\mu,X)\propto \mathrm e^{-\mu^2/2} \mathrm e^{-(\mu^2S_0-2\mu S_1+S_2)/(2\sigma^2)}\;,$
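The expansion of the sum of squares into the $S_k$ can be spot-checked numerically; the data values below are arbitrary and just for illustration:

```python
# Check that sum_i (X_i - mu)^2 == mu^2*S0 - 2*mu*S1 + S2
# for arbitrary synthetic data (assuming numpy).
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=7)
mu = 0.3

S0, S1, S2 = len(X), X.sum(), (X**2).sum()
assert np.isclose(((X - mu) ** 2).sum(), mu**2 * S0 - 2 * mu * S1 + S2)
```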
and with $T:=S_0+\sigma^2=n+\sigma^2$ we can rewrite this as
$ \begin{eqnarray} p(\mu,X) &\propto& \mathrm e^{-(\mu^2T-2\mu S_1+S_2)/(2\sigma^2)} \\ &=& \mathrm e^{-T(\mu-S_1/T)^2/(2\sigma^2)}\mathrm e^{-(S_2-S_1^2/T)/(2\sigma^2)} \\ &\propto& \mathrm e^{-T(\mu-S_1/T)^2/(2\sigma^2)}\;, \end{eqnarray} $
where again we don't need the second factor because it doesn't depend on $\mu$.
So we have a new normal distribution for $\mu$ given $X$, and we can read off the mean $S_1/T$ and the variance $\sigma^2/T$ without having to carry out any integrations.
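As a sanity check on this closed form, we can compare it against a brute-force numerical integration of the unnormalized posterior on synthetic data; this is a sketch assuming numpy, with the data parameters chosen arbitrarily:

```python
# Check that the posterior mean S1/T and variance sigma^2/T match
# brute-force numerical integration of the unnormalized posterior.
import numpy as np

rng = np.random.default_rng(0)
sigma, n = 2.0, 50
mu_true = rng.normal()                   # mu drawn from the N(0, 1) prior
X = rng.normal(mu_true, sigma, size=n)   # n measurements with std sigma

S1 = X.sum()
T = n + sigma**2
mean_analytic = S1 / T
var_analytic = sigma**2 / T

# Integrate the unnormalized posterior on a fine grid.
mu = np.linspace(-10.0, 10.0, 20001)
dmu = mu[1] - mu[0]
log_post = -mu**2 / 2 - ((X[:, None] - mu) ** 2).sum(axis=0) / (2 * sigma**2)
post = np.exp(log_post - log_post.max())  # rescale to avoid underflow
post /= post.sum() * dmu                  # normalize numerically
mean_num = (mu * post).sum() * dmu
var_num = ((mu - mean_num) ** 2 * post).sum() * dmu

assert abs(mean_num - mean_analytic) < 1e-6
assert abs(var_num - var_analytic) < 1e-6
```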
Analyzing these results is a good exercise in Bayesianology. If we write
$\frac{S_1}T=\frac{n\bar X+\sigma^2\cdot0}{n+\sigma^2}=\frac{n\sigma^{-2}\cdot\bar X+1\cdot0}{n\sigma^{-2}+1}\;,$
$\frac{T}{\sigma^2}=\frac{n+\sigma^2}{\sigma^2}=1+n\sigma^{-2}\;,$
we can see that the precision of the distribution of the mean has increased by the number of measurements times the precision $\sigma^{-2}$ of a single measurement, and that the new expectation value of the mean is the average of the prior expectation value and the mean of the measurements, weighted by their respective precisions. Thus, things are nicely linear: each measurement adds its precision to the overall precision and its precision-weighted value to the weighted sum, and the prior distribution can be treated as if it were an initial measurement.
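This "prior as an initial measurement" view can be sketched as a streaming update; the function below is an illustration (not from the text), tracking only the running precision and the precision-weighted sum:

```python
# Streaming precision-weighted update. The N(0, 1) prior enters as an
# "initial measurement" of value 0 with precision 1; each real measurement
# x adds precision 1/sigma^2 and precision-weighted value x/sigma^2.
def posterior(measurements, sigma):
    precision = 1.0      # prior precision (variance 1)
    weighted_sum = 0.0   # prior mean 0 times prior precision
    for x in measurements:
        precision += 1.0 / sigma**2
        weighted_sum += x / sigma**2
    return weighted_sum / precision, 1.0 / precision  # mean, variance
```

For $n$ measurements with total $S_1$ this reproduces the closed form: mean $S_1/T$ and variance $\sigma^2/T$ with $T=n+\sigma^2$.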