This is an elementary result in the setting of measure theory. Indeed, $ {\rm Var}(X):={\rm E}\big[(X - {\rm E}(X))^2\big] = \int_\Omega {(X - {\rm E}(X))^2 \,{\rm dP}} , $ and since $(X- {\rm E}(X))^2$ is a nonnegative random variable, the condition $ \int_\Omega {(X - {\rm E}(X))^2 \,{\rm dP}} = 0 $ implies that $(X- {\rm E}(X))^2 = 0$ with probability $1$, hence also ${\rm P}(X = {\rm E}(X)) = 1$.
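For completeness, here is a sketch of the measure-theoretic fact being used (the sets $A_n$ below are just auxiliary notation for this sketch): if $Z \geq 0$ and $\int_\Omega {Z \,{\rm dP}} = 0$, then $Z = 0$ with probability $1$. Put $A_n = \{ Z \geq 1/n \}$; then $ 0 = \int_\Omega {Z \,{\rm dP}} \ge \int_{A_n} {Z \,{\rm dP}} \ge \tfrac{1}{n}\,{\rm P}(A_n), $ so ${\rm P}(A_n) = 0$ for every $n$, and since $\{Z > 0\} = \bigcup_n A_n$, countable subadditivity gives ${\rm P}(Z > 0) = 0$.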
EDIT: While the above proof is elementary in the setting of measure theory, it is clearly not the intended approach here, so let me elaborate on Henry's answer (the simple approach).

Let $Y=(X-{\rm E}(X))^2$. We want to show that ${\rm E}(Y)=0$ (that is, ${\rm Var}(X)=0$) implies ${\rm P}(X={\rm E}(X))=1$. It suffices to show that for any nonnegative random variable $Y$, the condition ${\rm E}(Y)=0$ implies ${\rm P}(Y=0)=1$; the desired result then follows, since $Y=0$ iff $X={\rm E}(X)$ when $Y=(X-{\rm E}(X))^2$.

So, let $Y$ be a nonnegative random variable with ${\rm E}(Y)=0$, and fix $a > 0$. If ${\rm P}(Y > a) = b$ for some $b > 0$, then ${\rm E}(Y) \geq ab > 0$: indeed, writing $F$ for the distribution of $Y$, $ {\rm E}(Y) = \int_{[0,\infty )} {y\,F(dy)} \ge \int_{(a,\infty )} {y\,F(dy)} \ge a\int_{(a,\infty )} {F(dy)} = a\,{\rm P}(Y > a) = ab. $ We conclude that ${\rm P}(Y > a) = 0$, that is, ${\rm P}(Y \leq a)=1$, for every $a > 0$. On the other hand, ${\rm P}(Y \leq a)=0$ for all $a < 0$, since $Y$ is nonnegative. Hence the distribution function of $Y$ is that of the constant random variable $0$ (recall that a distribution function is, in particular, right-continuous). Thus, by right-continuity, ${\rm P}(Y \leq 0) = \lim_{a \downarrow 0} {\rm P}(Y \leq a) = 1$, hence also ${\rm P}(Y = 0) = 1$.
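As a side remark, the inequality established above is just a form of Markov's inequality for nonnegative random variables: $ {\rm P}(Y > a) \le \frac{{\rm E}(Y)}{a} $ for every $a > 0$, so ${\rm E}(Y) = 0$ forces ${\rm P}(Y > a) = 0$ for all $a > 0$ directly.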