You're asking about the optimal value of the random assignment problem, which has received quite a lot of study in the past couple of decades. Most of the work, though, has been with uniform$[0,1]$ or exponential$(1)$ cost distributions, and with minimums rather than maximums. Perhaps this is because the uniform and exponential cases have proved easiest to analyze. While I don't think there has been much explicit work on the case in which the costs are $N(0,1)$, we can piece together some of the results to say that the expected value you're looking for is $E[\Phi_n] \sim A \sqrt{\log n},$ for some constant $A$.
Define $L_n = \min_{\pi \in S_n} \sum_i C_{i,\pi(i)}.$ Frenk, van Houweninge, and Rinnooy Kan ("Order statistics and the linear assignment problem," Computing 39 (2), 165-174, 1987) proved that, under some mild conditions on the cumulative distribution function $F$, $E[L_n] \sim C n F^{-1}(1/n)$ for some constant $C$ that depends on $F$. Since the normal pdf is symmetric, we can apply the standard asymptotic estimate $F^{-1}(1-1/n) \sim \sqrt{2 \log n}$ (i.e., work with the maximum rather than the minimum), so that when the costs are $N(0,1)$ we get, for some constant $A$, $E[\Phi_n] \sim A \sqrt{\log n},$ which is not far from what your numerical evidence says. (If $C$ turns out to be $1$ in the normal case, then we would have $A = \sqrt{2}$.)
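If you want a quick numerical sanity check of that quantile estimate, here's a short Python sketch (assuming NumPy and SciPy are available); `scipy.stats.norm.ppf` plays the role of $F^{-1}$, and the ratio drifts toward $1$, though quite slowly:

```python
# Quantile asymptotics for the standard normal: F^{-1}(1 - 1/n) ~ sqrt(2 log n),
# which is where the sqrt(log n) growth (and the guess A = sqrt(2) if C = 1) comes from.
import numpy as np
from scipy.stats import norm

for n in [10**2, 10**4, 10**6, 10**9]:
    q = norm.ppf(1 - 1 / n)   # F^{-1}(1 - 1/n) for N(0,1)
    print(f"n = {n:>10}: F^-1(1-1/n) = {q:.4f}, "
          f"ratio to sqrt(2 ln n) = {q / np.sqrt(2 * np.log(n)):.4f}")
```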
For the exponential$(1)$ distribution, Aldous ("The $\zeta(2)$ limit in the random assignment problem," Random Structures and Algorithms 18 (4), 381-418, 2001) proved that $\lim_{n \to \infty} E[L_n] = \zeta(2) = \frac{\pi^2}{6}.$ Aldous notes that his result holds for any cost distribution with the same density at $0$ (assuming this density exists and is positive), so we have the same limiting value if the costs are uniform$[0,1]$. This was followed a few years later by independent work of Linusson and Wästlund ("A proof of Parisi's conjecture on the random assignment problem," Probability Theory and Related Fields 128 (3), 419-440, 2004) and Nair, Prabhakar, and Sharma ("Proofs of the Parisi and Coppersmith-Sorkin random assignment conjectures," Random Structures and Algorithms 27 (4), 413-444, 2005) proving that in the exponential$(1)$ case $E[L_n] = \sum_{i=1}^n \frac{1}{i^2}.$
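As a sanity check of that exact formula, here is a small Monte Carlo sketch using SciPy's `linear_sum_assignment` solver; the matrix sizes, trial count, and seed are just illustrative choices on my part:

```python
# Monte Carlo check of E[L_n] = sum_{i=1}^n 1/i^2 for exponential(1) costs.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)   # arbitrary seed

def min_assignment_cost(cost):
    rows, cols = linear_sum_assignment(cost)   # optimal permutation (Hungarian-type solver)
    return cost[rows, cols].sum()

for n in [10, 50, 200]:
    trials = 200                                # illustrative choice
    sims = [min_assignment_cost(rng.exponential(1.0, size=(n, n)))
            for _ in range(trials)]
    exact = sum(1.0 / i**2 for i in range(1, n + 1))
    print(f"n = {n:>3}: simulated mean = {np.mean(sims):.4f}, "
          f"sum of 1/i^2 = {exact:.4f}")
```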
For more, see the recent survey paper by Krokhmal and Pardalos ("Random assignment problems," European Journal of Operational Research 194, 1-17, 2009). There are also some results given there on variances; for instance, in the exponential$(1)$ case, it appears that $\sigma^2(L_n) = \frac{4\zeta(2) - 4\zeta(3)}{n} + O(n^{-2}).$
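Numerically, the leading coefficient in that variance formula is about $1.77$; here's a tiny check with SciPy's `zeta` (the values of $n$ below are just illustrative):

```python
# Numerical value of the leading variance coefficient 4*zeta(2) - 4*zeta(3).
from scipy.special import zeta

coeff = 4 * zeta(2) - 4 * zeta(3)   # = 4*(pi^2/6 - zeta(3))
print(coeff)                        # roughly 1.7715
for n in [10, 100, 1000]:
    print(n, coeff / n)             # leading-order approximation to Var(L_n)
```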