The usual notation for a probability measure on $(\Omega,\mathcal F)$ is $P$ and one rather uses $\mu$ to denote the distribution of a random variable $X:\Omega\to\mathbb R$, that is, a probability measure on $(\mathbb R,\mathcal B(\mathbb R))$. With these conventions, recall that $\mu$ is defined as the unique measure such that, for every $B$ in $\mathcal B(\mathbb R)$, $\mu(B)=P(X^{-1}(B))$ (since the hypothesis that $X$ is a random variable ensures exactly that $A=X^{-1}(B)$ is in $\mathcal F$, hence that $P(A)$ is well defined).
Thus, the definition of $\mu$ amounts to asking that, for every $u=\mathbf 1_B$ with $B$ in $\mathcal B(\mathbb R)$, $ \int_\Omega u(X)\, \mathrm dP=\int\limits_{\mathbb R}u(x)\,\mathrm d\mu(x).\tag{$\ast$} $ It is a standard result, explained in every decent introductory textbook on probability theory (with measure theory), that $(\ast)$ holds in fact for every measurable function $u:\mathbb R\to\mathbb R$ which is integrable with respect to $\mu$.
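As a sanity check of $(\ast)$ for an indicator $u=\mathbf 1_B$, here is a minimal numerical sketch. The choice of $X$ is purely illustrative: $\Omega$ is modelled by uniform draws and $X=-\ln(1-U)$ is exponential of parameter $1$, so that $\mu([0,1])=1-e^{-1}$ is known in closed form.

```python
import math
import random

random.seed(0)
n = 200_000

# Model Omega by n uniform draws; X(omega) = -ln(1 - omega) is Exponential(1),
# so its distribution mu satisfies mu([0, 1]) = 1 - e^{-1}.
X = [-math.log(1.0 - random.random()) for _ in range(n)]

# Left side of (*) with u = 1_{[0,1]}: the integral of u(X) against P,
# approximated by a Monte Carlo average over the sampled omegas.
lhs = sum(1 for x in X if 0 <= x <= 1) / n

# Right side of (*): mu([0, 1]) computed directly from the distribution.
rhs = 1 - math.exp(-1)

# The two sides agree up to Monte Carlo error.
```

Any other Borel set $B$ and any other simulable $X$ would do; the point is only that both sides of $(\ast)$ estimate the same number.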
In the proof usually presented, one considers the class $C$ of measurable functions $u:\mathbb R\to\mathbb R$ such that $(\ast)$ holds. One first notes that $C$ contains every $\mathbf 1_B$ with $B$ in $\mathcal B(\mathbb R)$ (this is exactly the definition of $\mu$), then, by linearity, that $C$ contains every simple nonnegative function, then, by monotone convergence, that $C$ contains every nonnegative measurable function, and finally, writing $u=u^+-u^-$ and using linearity again, that $C$ contains every $\mu$-integrable function.
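Spelled out, the linearity step for a simple nonnegative function $u=\sum_{i=1}^n a_i\mathbf 1_{B_i}$ reads $ \int_\Omega u(X)\,\mathrm dP=\sum_{i=1}^n a_i\,P(X^{-1}(B_i))=\sum_{i=1}^n a_i\,\mu(B_i)=\int\limits_{\mathbb R}u(x)\,\mathrm d\mu(x), $ and the next step uses that every nonnegative measurable $u$ is the increasing pointwise limit of simple nonnegative functions $u_n$, so that both sides of $(\ast)$ pass to the supremum by monotone convergence.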
Your post is concerned with the case where $u:x\mapsto x$. One sees that $(\ast)$ holds as soon as one side of the identity is finite, that is, as soon as $X$ is integrable for $P$ or, equivalently, as soon as $u:x\mapsto x$ is integrable for $\mu$. Thus, when $(\ast)$ holds and when $\mu$ has density $f$ with respect to the Lebesgue measure $\lambda$, $ \mathrm E(X)=\int_\Omega X(\omega)\, \mathrm dP(\omega)=\int\limits_{\mathbb R}x\,\mathrm d\mu(x)=\int\limits_{\mathbb R}xf(x)\,\mathrm d\lambda(x). $
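A quick numerical sketch of this last chain of equalities, again with the illustrative choice of $X$ exponential of parameter $1$, so that $f(x)=e^{-x}$ on $(0,\infty)$ and $\mathrm E(X)=1$:

```python
import math
import random

random.seed(0)
n = 200_000

# Left side: E(X) as the integral of X against P, approximated by the
# empirical mean of n realisations X = -ln(1 - U), U uniform on (0, 1).
sample_mean = sum(-math.log(1.0 - random.random()) for _ in range(n)) / n

# Right side: the integral of x f(x) against Lebesgue measure, with
# f(x) = e^{-x}, approximated by a Riemann sum on (0, 40].
dx = 0.001
density_integral = sum(i * dx * math.exp(-i * dx) * dx for i in range(1, 40_000))

# Both quantities approximate E(X) = 1.
```

The agreement of the two numbers is the identity $(\ast)$ at work: the left side lives on $\Omega$, the right side lives on $\mathbb R$ with the density $f$.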