I have a prediction $f(x)$ of some continuous process variable, based on an input variable $x$ (think: location). The prediction is not exact; its error is normally distributed with expected value $\mu$ and standard deviation $\sigma$.
Hence, the probability density function of $f(x)$ should be
$$\frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(f(x)-\mu)^2}{2\sigma^2}}$$
Is this correct? (No it is not, see answer below)
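For concreteness, here is a minimal numerical sketch of the density I think I am after, under the assumption that the error is defined as true value minus prediction, so that the true value is distributed as $N(f(x)+\mu,\ \sigma^2)$ (the function name and the sign convention are my own, not established notation):

```python
import numpy as np
from scipy.stats import norm

def true_value_pdf(y, f_x, mu, sigma):
    """Density of the true process value y given the prediction f_x,
    assuming error = true value - prediction ~ N(mu, sigma^2),
    so the true value is N(f_x + mu, sigma^2)."""
    return norm.pdf(y, loc=f_x + mu, scale=sigma)
```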
Now, I have a measurement $m$ of the process variable, based on an unknown input variable $x_m$. $m$ is assumed to be accurate, but quantized to integer values.
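Since $m$ is quantized, I would guess that the likelihood of observing a particular integer $m$ is the probability mass of the normal density over the interval $[m-0.5,\ m+0.5)$. A sketch, assuming round-to-nearest quantization (again with my own sign convention for the error):

```python
from scipy.stats import norm

def quantized_likelihood(m, f_xi, mu, sigma):
    """P(measurement == m | prediction f_xi), assuming the measurement
    is the true value rounded to the nearest integer and
    error = true value - prediction ~ N(mu, sigma^2)."""
    mean = f_xi + mu
    return norm.cdf(m + 0.5, mean, sigma) - norm.cdf(m - 0.5, mean, sigma)
```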
Given a set of $x_i$ together with their predictions $f(x_i)$, how can I compute the probability that $x_m$ was in the vicinity of a given $x_i$?
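My current idea, sketched below under the (possibly naive) assumption of a uniform prior over the candidate $x_i$, is that Bayes' rule then reduces to normalizing the quantized likelihoods across the candidates:

```python
import numpy as np
from scipy.stats import norm

def posterior_over_candidates(m, f_values, mu, sigma):
    """Probability that x_m corresponds to each candidate x_i,
    assuming a uniform prior over the candidates and
    round-to-nearest quantization of the measurement m."""
    means = np.asarray(f_values) + mu
    likelihoods = norm.cdf(m + 0.5, means, sigma) - norm.cdf(m - 0.5, means, sigma)
    return likelihoods / likelihoods.sum()

# e.g. posterior_over_candidates(m=7, f_values=[5.2, 6.9, 8.1], mu=0.0, sigma=1.0)
# would give one probability per candidate x_i.
```

Is normalizing the likelihoods like this a valid way to get the probability I am asking about, or does it only give a relative ranking of the candidates?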
I apologize if the question makes no sense. Comments that help me improve the question are appreciated!