Given the definition of conditional expectation as $E[X \mid B] = \frac{E[1(B) \cdot X]}{P(B)}$, where $1(B)$ is an indicator function that returns $1$ when $B$ is true and $0$ when $B$ is false, it would seem that $E[1(B)\cdot X]$ already takes the expected value of $X$ over the outcomes where $B$ is true. Why the further division by $P(B)$? An intuitive as well as a formal explanation would be appreciated.
E.g., suppose we have:
$(S,P)$: $(1,3)$ $(1,4)$ $(0,3)$ $(0,2)$ $(0,1)$ $(0,0)$ $(1,1)$ $(1,2)$ $(1,3)$ $(0,2)$
$P(S = 1) = 0.5$ and $E[1(S=1) \cdot P] = \frac{13}{5} = 2.6$ (where $1(S=1)$ is the indicator function that is $1$ if $S = 1$ and $0$ if $S \neq 1$).
Thus $E[P \mid S = 1] = \frac{E[1(S=1) \cdot P]}{P(S = 1)} = \frac{2.6}{0.5} = 5.2$? What am I doing wrong here?
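To make it easier to point at the exact step where I go wrong, here is a minimal Python sketch of the computation I describe above, assuming each of the 10 $(S, P)$ pairs is an equally likely outcome (variable names are just illustrative):

```python
# The sample of (S, P) pairs, each assumed equally likely.
pairs = [(1, 3), (1, 4), (0, 3), (0, 2), (0, 1),
         (0, 0), (1, 1), (1, 2), (1, 3), (0, 2)]

# P(S = 1): fraction of outcomes with S = 1.
p_s1 = sum(1 for s, _ in pairs if s == 1) / len(pairs)        # 0.5

# My computation of E[1(S=1) * P]: I sum P over the rows where S = 1
# and divide by the number of such rows (5), giving 13/5 = 2.6.
total = sum(p for s, p in pairs if s == 1)                    # 13
my_e_indicator_times_p = total / sum(1 for s, _ in pairs if s == 1)  # 2.6

# Then, following the definition, I divide by P(S = 1).
my_conditional = my_e_indicator_times_p / p_s1                # 5.2

print(p_s1, my_e_indicator_times_p, my_conditional)
```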