Why are these two statements equivalent? Intuitively it makes sense to me, but I am hoping to see the steps that prove the equivalence, either through calculation or descriptively.
$$X\sim N(\mu,\sigma^2)\iff X\sim \mu+\sigma N(0,1)$$
If you add a constant $a$ to a set of numbers, the mean gets shifted by that constant: the new mean is $\text{old mean} + a$. If you multiply the numbers by a constant $c$, the standard deviation gets scaled by it: the new standard deviation is $c \cdot \text{old std dev}$ (adding a constant leaves the standard deviation unchanged). So if $Z \sim N(0,1)$, then $\mu + \sigma Z$ has mean $\sigma \cdot 0 + \mu = \mu$ and standard deviation $\sigma \cdot 1 = \sigma$. Since a linear transformation of a normal random variable is still normal, it follows that $\mu + \sigma Z \sim N(\mu, \sigma^2)$.
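You can also check this shift-and-scale argument empirically. The sketch below (using NumPy; the sample size and the values of $\mu$ and $\sigma$ are arbitrary choices for illustration) draws standard normal samples $Z$, forms $\mu + \sigma Z$, and confirms the sample mean and standard deviation land near $\mu$ and $\sigma$:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 3.0, 2.0  # arbitrary example parameters

# Draw Z ~ N(0, 1), then shift by mu and scale by sigma.
z = rng.standard_normal(1_000_000)
x = mu + sigma * z

# The sample mean should be close to mu, the sample std close to sigma.
print(x.mean())
print(x.std())
```

With a million samples, both estimates typically agree with $\mu = 3$ and $\sigma = 2$ to two or three decimal places, which is exactly what $X \sim N(\mu, \sigma^2)$ predicts.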