As you already pointed out, the assertion holds if $h(x) = 1_C(x)$ is the indicator function of a measurable set $C$. By the linearity of the (conditional) expectation, this implies that
$$\mathbb{E}(h(X) \mid Y=y) = \int h(x) \mathcal{L}_{X \mid Y}(dx \mid y)$$
holds for any simple function $h$, i.e. any function $h$ of the form
$$h(x) = \sum_{j=1}^n a_j 1_{C_j}(x)$$
with $a_j \in \mathbb{R}$ and measurable sets $C_j$.
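Spelled out, the linearity step reads: for such an $h$,
$$\begin{align*} \mathbb{E}\Bigl(\sum_{j=1}^n a_j 1_{C_j}(X) \,\Big|\, Y=y\Bigr) &= \sum_{j=1}^n a_j \, \mathbb{E}\bigl(1_{C_j}(X) \mid Y=y\bigr) \\ &= \sum_{j=1}^n a_j \int 1_{C_j}(x) \, \mathcal{L}_{X \mid Y}(dx \mid y) = \int h(x) \, \mathcal{L}_{X \mid Y}(dx \mid y), \end{align*}$$
where the second equality uses the base case and the last one the linearity of the integral.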
Now if $h \geq 0$ is a measurable function, then there exists a sequence $(h_j)_{j \in \mathbb{N}}$ of simple functions with $h_j \geq 0$ and $h_j(x) \uparrow h(x)$ as $j \to \infty$ for every $x$. Using the (conditional) monotone convergence theorem (MCT) and the fact that the assertion already holds for simple functions, we get
$$\begin{align*} \mathbb{E}(h(X) \mid Y) &\stackrel{\text{MCT}}{=} \lim_{j \to \infty} \mathbb{E}(h_j(X) \mid Y) \\ &= \lim_{j \to \infty} \int h_j(x) \, \mathcal{L}_{X \mid Y}(dx \mid Y) \\ &\stackrel{\text{MCT}}{=} \int h(x) \, \mathcal{L}_{X \mid Y}(dx \mid Y) \end{align*}$$
which shows that, for $\mathbb{P}_Y$-almost every $y$,
$$\mathbb{E}(h(X) \mid Y=y) = \int h(x) \, \mathcal{L}_{X \mid Y}(dx \mid y).$$
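For concreteness, one standard choice of approximating sequence in the step above is the dyadic truncation
$$h_j(x) = \sum_{k=1}^{j 2^j} \frac{k-1}{2^j} \, 1_{\{(k-1)/2^j \leq h < k/2^j\}}(x) + j \, 1_{\{h \geq j\}}(x),$$
which is simple, satisfies $0 \leq h_j \leq h$, and increases pointwise to $h$.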
Finally, if $h$ is a measurable function such that $h(X) \in L^1$, we write $h = h^+ - h^-$ with positive part $h^+ = \max(h,0) \geq 0$ and negative part $h^- = \max(-h,0) \geq 0$, apply the previous step to $h^+$ and $h^-$, and use once more the linearity of the (conditional) expectation and of the integral.
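Explicitly, combining the two parts gives
$$\begin{align*} \mathbb{E}(h(X) \mid Y=y) &= \mathbb{E}(h^+(X) \mid Y=y) - \mathbb{E}(h^-(X) \mid Y=y) \\ &= \int h^+(x) \, \mathcal{L}_{X \mid Y}(dx \mid y) - \int h^-(x) \, \mathcal{L}_{X \mid Y}(dx \mid y) = \int h(x) \, \mathcal{L}_{X \mid Y}(dx \mid y), \end{align*}$$
where both integrals are finite for $\mathbb{P}_Y$-almost every $y$ because $h(X) \in L^1$.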