If you have a probability density $p(x)$, then I would actually write: "The expected value of $f(X)$ is the sum over all $x$ of $f(x)$, weighted by the infinitesimal probability that $X\in[x,x+dx]$, which is $p(x)dx$."
So there are two things going on here. First, $p(x)$ is a probability density, in the sense that $p(x)$ has units of probability per unit $x$ ($x$ could be a length, an age, whatever). In other words, $p(x)$ by itself is NOT a probability in the usual sense; it needs to be multiplied by the $dx$ term to become an infinitesimal probability. This is sort of a wishy-washy physics explanation of probability density.
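As a quick sanity check (a toy example of my own, not something from the question): take $p(x)=e^{-x}$ for $x\ge 0$. Then
$$P(X\in[1,1.01])=\int_1^{1.01} e^{-x}\,dx \approx e^{-1}\cdot 0.01 \approx 0.0037,$$
which is exactly the "density times a small width $dx$" reading of $p(x)dx$.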
Next, when writing $E[f(X)]$ one has two choices. The first is to express $E[f(X)]$ in terms of the probability density of $X$, which is $p(x)$. In that case we have $E[f(X)]=\int f(x)p(x)\,dx$: we are querying the domain of $f$, weighting each point by how likely $X$ is to land near it, and then applying $f$. The other option is to write the expected value in terms of the density of $f(X)$ itself as a random variable. Here we define $g(y)$ as the density of $Y=f(X)$, so that $g(y)\,dy$ is the infinitesimal probability that $f(X)\in[y,y+dy]$. Now we are querying the range of $f$, which inherits its randomness from the random input $X$: for any event $A$, $P(f(X)\in A)=P(X\in f^{-1}(A))$, where the right-hand side is a known quantity since we know the distribution of $X$. With this density we get $E[f(X)]=\int y\,g(y)\,dy$.
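If it helps, here is a minimal numerical sketch of the two options agreeing, under assumptions of my own choosing: $X\sim\text{Uniform}(0,1)$ and $f(x)=x^2$, so that $Y=f(X)$ has density $g(y)=\frac{1}{2\sqrt{y}}$ on $(0,1)$ (from $P(Y\le y)=P(X\le\sqrt{y})=\sqrt{y}$). Nothing below is specific to that choice.

```python
import numpy as np
from scipy import integrate

# Toy example (my assumption, not from the question): X ~ Uniform(0, 1), f(x) = x^2.
p = lambda x: 1.0          # density of X on [0, 1]
f = lambda x: x**2

# Option 1: query the domain of f, weighting by the infinitesimal probability p(x)dx.
E_domain, _ = integrate.quad(lambda x: f(x) * p(x), 0, 1)

# Option 2: query the range of f. Y = f(X) has density g(y) = 1/(2*sqrt(y)) on (0, 1),
# obtained from P(Y <= y) = P(X <= sqrt(y)) = sqrt(y).
g = lambda y: 1.0 / (2.0 * np.sqrt(y))
E_range, _ = integrate.quad(lambda y: y * g(y), 0, 1)

print(E_domain, E_range)   # both are (numerically) 1/3
```

Both printouts come out to $1/3$, which is the point: the two integrals are just two bookkeeping schemes for the same expectation.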