Short version:
I would like to calculate the expected value of the sigmoid function $\frac{1}{1+e^{-x}}$ applied to a normally distributed random variable $X$ with mean $\mu$ and standard deviation $\sigma$.
If I'm correct, this corresponds to the following integral:
$\int_{-\infty}^\infty \frac{1}{1+e^{-x}} \frac{1}{\sigma\sqrt{2\pi}}\ e^{ -\frac{(x-\mu)^2}{2\sigma^2} } dx$
However, I can't solve this integral in closed form. I've tried by hand, with Maple, and with Wolfram|Alpha, but didn't get anywhere. (Evaluating it numerically is no problem; a sketch of that follows.)
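For concreteness, here is a minimal numerical sketch using scipy's quadrature; the helper name `expected_sigmoid` and the example parameters $\mu=1$, $\sigma=2$ are just for illustration:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import expit  # numerically stable sigmoid 1/(1+e^{-x})
from scipy.stats import norm


def expected_sigmoid(mu, sigma):
    """E[S(X)] for X ~ N(mu, sigma^2), by numerical quadrature."""
    integrand = lambda x: expit(x) * norm.pdf(x, loc=mu, scale=sigma)
    value, _err = quad(integrand, -np.inf, np.inf)
    return value


print(expected_sigmoid(1.0, 2.0))  # E[S(X)], roughly 0.65 for mu=1, sigma=2
print(expit(1.0))                  # S(E[X]) = S(mu) ≈ 0.731, noticeably different
```

So the value itself is easy to get; what I'm after is a closed-form (or good analytic approximate) expression.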
Some background info (why I want to do this):
Sigmoid functions are used in artificial neural networks as activation functions, mapping a value from $(-\infty,\infty)$ to $(0,1)$. Often this value is used directly in further calculations, but sometimes (e.g. in RBMs) it's first stochastically rounded to a 0 or a 1, with the probability of a 1 being that value. The stochasticity helps the learning, but is sometimes not desired when you finally use the network. Simply running a stochastically trained network in the usual non-stochastic way doesn't work, though: it changes the expected result, because (in short):
$\operatorname{E}[S(X)] \neq S(\operatorname{E}[X])$
for most $X$ (where $S$ is the sigmoid). However, if you approximate $X$ as normally distributed and could somehow calculate this expected value, you could eliminate most of the bias. That's what I'm trying to do.
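A quick Monte Carlo sketch of this bias (assuming numpy; the parameters $\mu=1$, $\sigma=2$ and the sample size are arbitrary), simulating the stochastic rounding described above:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.0, 2.0

# Pre-activations of a unit, approximated as normally distributed.
x = rng.normal(mu, sigma, size=1_000_000)
p = 1.0 / (1.0 + np.exp(-x))  # sigmoid activations S(x)

# Stochastic rounding: output 1 with probability S(x), else 0.
stochastic = rng.random(x.size) < p

print(stochastic.mean())          # average stochastic output ≈ E[S(X)], ~0.65
print(1.0 / (1.0 + np.exp(-mu)))  # non-stochastic S(E[X]) = S(mu) ≈ 0.731
```

The gap between the two printed numbers is exactly the bias I'd like to correct for by computing $\operatorname{E}[S(X)]$ directly.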