When we have an estimator $T$ of some parameter $\theta$, what is actually meant by the so-called asymptotic variance $V$ is the number such that $\sqrt n (T - \theta) \stackrel{d}{\longrightarrow} N(0, V)$ (more generally, one can replace $\sqrt n$ with a sequence $k_n$ for which $k_n (T - \theta)$ converges in distribution to a normal; see e.g. Casella and Berger). Note that this lets us talk about the "asymptotic variance" of estimators that don't necessarily even have a finite first or second moment.
For the normal distribution, the asymptotic variance of $\sqrt n (\bar X - \mu)$ under the above definition is $\sigma^2$, so a large-sample approximation of the variance of $\bar X$ is $\sigma^2 / n$ - this happens to also be the exact variance of $\bar X$. If $M_n$ is the sample median, we can show that $\sqrt n (M_n - \mu) \stackrel{d}{\longrightarrow} N\left(0,\frac{1}{4f(\mu)^2}\right) \qquad (\dagger)$
and so the asymptotic variance of $\sqrt n (M_n - \mu)$ is $\frac{1}{4 f(\mu)^2}$ - here $f$ is the density of the underlying distribution and $\mu$ is its median, which for a symmetric distribution like the normal coincides with the mean. This result holds much more generally than just for iid normal random variables (essentially it only requires $f$ to be continuous and positive at the median), so I'm writing it in the more general form with the density $f$ unspecified.
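If you want to convince yourself of $(\dagger)$ numerically, here is a minimal Monte Carlo sketch (just numpy; the seed and sample sizes are illustrative, not canonical). For $N(\mu, \sigma^2)$ data we have $f(\mu) = 1/(\sigma \sqrt{2\pi})$, so $(\dagger)$ predicts an asymptotic variance of $\pi \sigma^2 / 2 \approx 1.571\,\sigma^2$:

```python
import numpy as np

# Empirically check (†): the variance of sqrt(n) * (M_n - mu) across many
# replications should be close to 1 / (4 f(mu)^2) = pi * sigma^2 / 2 for
# N(mu, sigma^2) data.
rng = np.random.default_rng(0)
mu, sigma, n, reps = 0.0, 1.0, 500, 10_000

samples = rng.normal(mu, sigma, size=(reps, n))
medians = np.median(samples, axis=1)

print(np.var(np.sqrt(n) * (medians - mu)))  # empirical, ~1.57
print(np.pi * sigma**2 / 2)                 # theoretical value from (†)
```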
The usual comparison between two asymptotic variances is the asymptotic relative efficiency (ARE), here the ratio of the asymptotic variance of $\bar X$ to that of $M_n$: $4 \sigma^2 f(\mu)^2 = \frac {4 \sigma ^ 2}{2\pi \sigma^2} = \frac 2 \pi \approx 0.64$, so for the normal distribution $\bar X$ does better in terms of asymptotic variance than $M_n$. This doesn't hold for all distributions - for the Laplace (double exponential) distribution, for instance, the same calculation gives an ARE of $2$, so there the median beats the mean.
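Here is a quick simulation of the ARE itself - again a sketch at one fixed large $n$ rather than a true limit, with arbitrary constants:

```python
import numpy as np

# Estimate ARE = var(mean) / var(median) by Monte Carlo for normal and
# Laplace data. Expected limits: 2/pi ~ 0.64 (normal), 2 (Laplace).
rng = np.random.default_rng(1)
n, reps = 500, 10_000

def are(draw):
    x = draw(size=(reps, n))
    return np.var(x.mean(axis=1)) / np.var(np.median(x, axis=1))

print(are(rng.standard_normal))                  # ~ 2/pi ~ 0.64: mean wins
print(are(lambda size: rng.laplace(size=size)))  # ~ 2: median wins
```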
I'm feeling a bit too lazy to write out a rigorous proof of $(\dagger)$, so I'll just sketch the standard heuristic below and leave the slog of making it precise to someone else.
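Roughly (sweeping all the rigor under the rug): write $F$ for the CDF and $F_n$ for the empirical CDF, so $M_n$ approximately solves $F_n(M_n) = 1/2$ while $F(\mu) = 1/2$. Linearizing around $\mu$ with slope $f(\mu)$,

$$\frac 1 2 \approx F_n(M_n) \approx F_n(\mu) + f(\mu)(M_n - \mu) \quad \Longrightarrow \quad \sqrt n (M_n - \mu) \approx \frac{\sqrt n \left( \frac 1 2 - F_n(\mu) \right)}{f(\mu)}.$$

Now $n F_n(\mu)$ counts the observations at or below $\mu$, which is $\mathrm{Binomial}(n, 1/2)$, so the ordinary CLT gives $\sqrt n \left( F_n(\mu) - \frac 1 2 \right) \stackrel{d}{\longrightarrow} N\left(0, \frac 1 4\right)$, and dividing by $f(\mu)$ yields $(\dagger)$. Making the linearization step rigorous is essentially the Bahadur representation of the sample median.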
As far as linking this to the Cramér-Rao lower bound: estimators that achieve the Cramér-Rao lower bound asymptotically, which $\bar X$ does in this case, are called efficient. Estimators that achieve the Cramér-Rao lower bound are optimal in some sense. In terms of asymptotic variance you can actually do better than the Cramér-Rao lower bound at particular parameter values - the classic example is Hodges' "superefficient" estimator - but the estimators that beat the bound have some subtle problems associated with them, so typically if we have an estimator that achieves the CR lower bound we are satisfied with it so far as the large sample properties of the estimator are concerned.
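To make "subtle problems" concrete, here is a sketch of Hodges' classical example for $N(\theta, 1)$ data: estimate $\theta$ by $\bar X$ unless $|\bar X| < n^{-1/4}$, in which case estimate $0$. (The code samples $\bar X \sim N(\theta, 1/n)$ directly rather than simulating full samples; the seed and constants are arbitrary.)

```python
import numpy as np

# Hodges' estimator: T_n = Xbar if |Xbar| >= n^(-1/4), else 0.
# Compare n * MSE against the Cramér-Rao benchmark of 1 (the Fisher
# information of N(theta, 1) is 1).
rng = np.random.default_rng(2)
n, reps = 10_000, 100_000

def hodges_n_mse(theta):
    xbar = rng.normal(theta, 1 / np.sqrt(n), size=reps)  # Xbar ~ N(theta, 1/n)
    t = np.where(np.abs(xbar) >= n ** -0.25, xbar, 0.0)
    return n * np.mean((t - theta) ** 2)

print(hodges_n_mse(0.0))         # ~ 0: beats the CR bound at theta = 0
print(hodges_n_mse(1.0))         # ~ 1: behaves like Xbar away from zero
print(hodges_n_mse(n ** -0.25))  # large: terrible risk near the threshold
```

The last line is the "subtle problem": the superefficiency at $\theta = 0$ is paid for by a spike in the (normalized) risk at nearby parameter values, and that spike gets worse as $n$ grows.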