
Not sure if "linear transformation" is the correct terminology, but...

Let $X$ be a random variable with a normal distribution $f(x)$ with mean $\mu_{X}$ and standard deviation $\sigma_{X}$: $$f(x) = \frac{1}{\sigma_{X}\sqrt{\tau}}\exp{\left[\frac{-1}{2}\left(\frac{x-\mu_{X}}{\sigma_{X}}\right)^2\right]}$$

(Here, $\tau=2\pi$)

Let $Y$ be a random variable defined by the linear transformation $$Y = u(X) = aX+b$$

Let $v(y) = u^{-1}(y) = \frac{y-b}{a}$. Then $v^\prime(y) = \frac{1}{a}$.

Prove: $Y$ is normally distributed, with density function $$g(y)=f\bigl(v(y)\bigr)\,\bigl|v^\prime(y)\bigr|$$ $$= \frac{1}{\sigma_{X}\sqrt{\tau}}\exp{\left[\frac{-1}{2}\left(\frac{\frac{y-b}{a}-\mu_{X}}{\sigma_{X}}\right)^2\right]}\,\left|\frac{1}{a}\right|$$ $$= \frac{1}{\left|a\right|\sigma_{X}\sqrt{\tau}}\exp{\left[\frac{-1}{2}\left(\frac{y-(a\mu_{X}+b)}{a\sigma_{X}}\right)^2\right]}$$ with mean $\mu_{Y} = a\mu_{X}+b$ and standard deviation $\sigma_{Y} = \left|a\right|\sigma_{X}$.
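As a numerical sanity check of the claimed mean and standard deviation (not part of the original question; the parameter values below are arbitrary choices for illustration), one can simulate $Y = aX + b$ directly:

```python
import numpy as np

# Hypothetical example parameters (not from the post)
mu_x, sigma_x = 2.0, 3.0
a, b = -1.5, 4.0

rng = np.random.default_rng(0)
x = rng.normal(mu_x, sigma_x, size=1_000_000)
y = a * x + b  # the linear transformation Y = aX + b

# Claim: mu_Y = a*mu_X + b and sigma_Y = |a|*sigma_X
print(y.mean())  # should be close to a*mu_x + b = 1.0
print(y.std())   # should be close to |a|*sigma_x = 4.5
```

With a million samples, the empirical mean and standard deviation agree with $a\mu_X + b$ and $\lvert a\rvert\sigma_X$ to within a few hundredths.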

  • Are you sure about this problem? $$h(x) = \frac{1}{\sigma\sqrt{2\pi}}e^{\frac{-1}{2}\left(\frac{(mx+b)-\mu}{\sigma}\right)^2}$$ is a function $\mathbb{R} \to \left(0, \frac{1}{\sigma\sqrt{2\pi}}\right]$ and so, with $X$ being a normal random variable, $h(X)$ is _definitely_ not a normal random variable. – 2012-09-21
  • Did you give your own interpretation of the problem? The normal (unintended pun) question is to show that $g(X)$ has a normal distribution (and to find the mean and variance of $g(X)$). The question as posed makes no sense. – 2012-09-21
  • @TestSubject528491: I am typo-prone, but I **mean** $g(X)$. The function $g(t)=mt+b$ is indeed a linear function. But $g(X)$ is a normally distributed random variable, with mean $m\mu+b$ and variance $m^2\sigma^2$. Very standard stuff, which I have taught and used many times. – 2012-09-21
  • ::Foot-in-mouth:: comment deleted – 2012-09-21
  • There is an elegant proof of this using generalized polynomial chaos that no one seems to appreciate. --wistful sigh-- – 2012-11-04

1 Answer


If $X$ is a normally distributed random variable, then so is $g(X) = mX+b$. But the proposition as you stated it is wrong.

A proof can go like this (first assume $m>0$, then mutatis mutandis): $$ \begin{align} F_{g(X)}(y) & = \Pr(g(X) \le y) = \Pr(mX+b \le y) = \Pr\left(X \le \frac{y-b}{m}\right) \\[12pt] & = \int_{-\infty}^{(y-b)/m} \frac{1}{\sqrt{2\pi}}\cdot\exp\left(\frac{-(x-\mu)^2}{2\sigma^2}\right)\,\frac{dx}{\sigma} ; \end{align} $$ therefore $$ f_{g(X)}(y) = \frac{d}{dy} \int\cdots\cdots\text{(ditto)}\cdots\cdots\quad = \frac{1}{\sqrt{2\pi}} \exp \left( \frac{-\left(\frac{y-b}{m}-\mu\right)^2}{2\sigma^2}\right)\frac{1}{\sigma}\cdot\frac{d}{dy}\,\frac{y-b}{m}, $$ and then simplify.
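To confirm that the simplification comes out right (again an illustrative check, not part of the answer; the parameter values are arbitrary), one can compare the differentiated density above with a normal density having the claimed parameters $m\mu+b$ and $\lvert m\rvert\sigma$:

```python
import numpy as np

# Hypothetical parameters for illustration (not from the answer)
mu, sigma = 1.0, 2.0
m, b = 3.0, -0.5

def f_gX(y):
    # Density from differentiating F_{g(X)}: the chain rule contributes
    # the factor d/dy (y-b)/m = 1/m (written |m| so it also covers m < 0)
    return (np.exp(-((y - b) / m - mu) ** 2 / (2 * sigma ** 2))
            / (np.sqrt(2 * np.pi) * sigma * abs(m)))

def normal_pdf(y, loc, scale):
    # Normal density with the claimed mean and standard deviation
    return np.exp(-(y - loc) ** 2 / (2 * scale ** 2)) / (scale * np.sqrt(2 * np.pi))

ys = np.linspace(-10.0, 15.0, 101)
# The simplified density should coincide with that of N(m*mu + b, (|m|*sigma)^2)
print(np.allclose(f_gX(ys), normal_pdf(ys, m * mu + b, abs(m) * sigma)))  # → True
```

The two expressions agree pointwise, which is exactly the "and then simplify" step: completing the algebra in the exponent turns $\frac{y-b}{m}-\mu$ into $\frac{y-(m\mu+b)}{m}$, giving a normal density with mean $m\mu+b$ and standard deviation $\lvert m\rvert\sigma$.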