There really is no need to evaluate all those integrals in this case. Remember that if $g(x)$ is a nonnegative function with finite (nonzero) integral, then $h(x) = g(x)/\int_{-\infty}^{\infty} g(t)\, \mathrm dt$ is a probability density function (because $h(x)$ is nonnegative and the "area under the curve $h(x)$" is $1$). (Exercise: show that this idea works for nonpositive functions too!)

Armed with this, and visualizing the joint density function $f(x,y)$ as a surface above the $x$-$y$ plane, we see that given $X = x_0$, the conditional density of $Y$ is proportional to $f(x_0, y) = \exp(-x_0 - y),~~ x_0 < y < \infty$, where the constant of proportionality is $\frac{1}{\int_{-\infty}^{\infty} f(x_0, t)\, \mathrm dt} = \frac{1}{\int_{x_0}^{\infty} f(x_0, t)\, \mathrm dt} = \frac{1}{f_X(x_0)}$ and "normalizes" (unitizes?) the area under the curve to $1$.

Now, instead of evaluating this integral to get the exact conditional density function, we can argue that the "shape" of the conditional density is an exponentially decaying function of $y$ on $(x_0, \infty)$, and so the conditional density of $Y$ given $X = x_0$ is just an exponential density (with mean and variance equal to $1$) that has been displaced $x_0$ to the right. Hence, $E[Y\mid X = x_0] = 1 + x_0$, and so $E[Y \mid X] = 1 + X$. The conditional mean-square error (MSE) is just the variance of this conditional density, namely $1$, and since this does not depend on the value of $X$, the unconditional MSE is also $1$.
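For anyone who does want to see the single integral written out once, here is a sketch based only on the shape $e^{-x_0-y}$ stated above (any constant of proportionality in $f$ would cancel in the ratio):
$$
f_{Y\mid X}(y \mid x_0) \;=\; \frac{e^{-x_0 - y}}{\int_{x_0}^{\infty} e^{-x_0 - t}\,\mathrm dt} \;=\; \frac{e^{-x_0 - y}}{e^{-2x_0}} \;=\; e^{-(y - x_0)}, \qquad y > x_0,
$$
which is exactly the $\mathrm{Exp}(1)$ density shifted $x_0$ to the right, confirming the shape argument.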
Note that in this instance, the MMSE estimator is the same as the linear MMSE estimator: since $E[Y \mid X] = 1 + X$ is itself an affine function of $X$, the best linear (affine) estimator cannot do better and therefore coincides with it.
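As a quick numerical sanity check, here is a minimal Monte Carlo sketch. It assumes the joint density is supported on $0 < x < y$ (not stated explicitly above), so that the marginal of $X$ is proportional to $e^{-2x}$, i.e., exponential with rate $2$; the sample size and variable names are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Assumed support 0 < x < y: the marginal of X is then proportional to exp(-2x),
# i.e. Exponential with rate 2, and (as argued above) Y given X = x is x + Exp(1).
x = rng.exponential(scale=1/2, size=n)      # X ~ Exp(rate 2)
y = x + rng.exponential(scale=1.0, size=n)  # Y = X + Exp(1), the shifted exponential

# Best linear (affine) estimator a + bX: b = Cov(X, Y)/Var(X), a = E[Y] - b*E[X]
b = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
a = y.mean() - b * x.mean()
print(a, b)                                  # both close to 1, i.e. the estimator 1 + X

# MSE of the estimator 1 + X; should be close to 1
print(np.mean((y - (1 + x)) ** 2))
```

Both fitted coefficients come out near $1$ and the empirical MSE near $1$, matching $E[Y \mid X] = 1 + X$ and the coincidence of the MMSE and linear MMSE estimators.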