I have a question regarding the following theorem:
Discrete random variables $X$ and $Y$ on $(\Omega,\mathcal{F},\mathbb P)$ are independent if and only if
$\mathbb E(g(X)h(Y))=\mathbb E(g(X))\mathbb E(h(Y))$
for all functions $g,h\colon \mathbb R\to \mathbb R$ for which the last two expectations exist.
The proof goes as follows:
The necessity of the theorem follows just as in the proof of theorem //. To prove sufficiency, let $a,b\in \mathbb R$ and define $g$ and $h$ by
$\begin{align}g(x)=\begin{cases}1&\text{if }x=a\\0&\text{if }x\neq a,\end{cases} && h(y)=\begin{cases}1&\text{if }y=b\\0&\text{if }y\neq b.\end{cases}\end{align}$
Then $\mathbb E(g(X)h(Y))=\mathbb P(X=a,Y=b)$
and
$\mathbb E(g(X))\mathbb E(h(Y))=\mathbb P(X=a)\mathbb P(Y=b)$
giving that $p_{X,Y}(a,b)=p_X(a)p_Y(b)$.
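For concreteness, here is a minimal numerical sanity check of the two displayed identities (the joint pmf and the points $a,b$ below are a hypothetical example of mine, not from the book):

```python
from itertools import product

# Hypothetical example: a small joint pmf for (X, Y), independent by
# construction, just to check the two expectation identities numerically.
p_X = {0: 0.3, 1: 0.7}          # marginal pmf of X
p_Y = {0: 0.4, 1: 0.6}          # marginal pmf of Y
p_XY = {(x, y): p_X[x] * p_Y[y] for x, y in product(p_X, p_Y)}

a, b = 1, 0                      # the fixed points from the proof

g = lambda x: 1.0 if x == a else 0.0   # indicator of {x = a}
h = lambda y: 1.0 if y == b else 0.0   # indicator of {y = b}

# E[g(X)h(Y)] from the joint pmf: only the (a, b) term survives,
# so it equals P(X = a, Y = b).
E_gh = sum(g(x) * h(y) * p for (x, y), p in p_XY.items())

# E[g(X)] E[h(Y)] from the marginals: equals P(X = a) P(Y = b).
E_g_E_h = (sum(g(x) * p for x, p in p_X.items())
           * sum(h(y) * p for y, p in p_Y.items()))

print(E_gh, E_g_E_h)  # both equal p_X[a] * p_Y[b]
```

So for these indicator choices the hypothesis $\mathbb E(g(X)h(Y))=\mathbb E(g(X))\mathbb E(h(Y))$ is exactly the statement $\mathbb P(X=a,Y=b)=\mathbb P(X=a)\mathbb P(Y=b)$.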
Now, in my eyes, the proof gives an example of the theorem instead of a general proof for all functions... So, instead of working with arbitrary functions $g$ and $h$, we work with two specific functions $g$ and $h$ and show that the theorem holds. Can someone explain to me how this is a proof of the theorem?
To be clear: I am only interested in the "sufficiency" part of the proof!