Suppose we have two random variables $X$ and $Y$ with means $\mu_X, \mu_Y$ and variances $\sigma_{X}^{2}$ and $\sigma_{Y}^{2}$. How would we derive $\text{Var} \left(\frac{X}{Y} \right)$?
Edit. $X$ and $Y$ are normally distributed.
Whenever a random variable $Z$ has a density that is continuous and positive at zero, $\frac1Z$ is not integrable.
In the case at hand, $\frac1Y$ is not integrable. If $X$ is independent of $Y$, then $\frac{X}Y$ is not integrable either; a fortiori, the variance of $\frac{X}Y$ does not exist, except in the degenerate case when $\sigma_Y^2=0\ne\mu_Y$.
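To spell out the independence step (assuming $X$ is independent of $Y$ and $\mathbb E|X|>0$):
$$
\mathbb E\left|\frac{X}{Y}\right|
= \mathbb E\left(|X|\cdot\frac1{|Y|}\right)
= \mathbb E(|X|)\,\mathbb E\left(\frac1{|Y|}\right)
= +\infty,
$$
since the first factor is positive and the second is infinite. Hence $\frac{X}Y$ is not integrable, and in particular its variance is undefined.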
To show the first assertion, note that such a $Z$ has density at least $\varepsilon\gt0$ on some interval $(-z,z)$ with $z\gt0$. Then $ \mathbb E\left(\frac1{|Z|}\right)\geqslant\int_{-z}^z\frac\varepsilon{|t|}\,\mathrm dt=+\infty. $
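One can see the divergence of this lower bound numerically. The snippet below (the helper name `truncated_bound` is mine, not from the original) evaluates the closed form of the bound restricted to $\delta<|t|<z$, which equals $2\varepsilon\log(z/\delta)$ and blows up as the cutoff $\delta$ shrinks:

```python
import math

def truncated_bound(eps, z, delta):
    """Lower bound for E(1/|Z|) restricted to delta < |t| < z:
    2 * integral from delta to z of (eps / t) dt = 2 * eps * log(z / delta)."""
    return 2.0 * eps * math.log(z / delta)

# The bound grows without limit as the cutoff delta tends to 0.
for delta in (1e-1, 1e-3, 1e-6, 1e-9):
    print(f"delta={delta:g}  bound={truncated_bound(0.1, 1.0, delta):.3f}")
```

Each factor-of-1000 reduction of the cutoff adds the same fixed amount to the bound, the signature of a logarithmic divergence.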
To illustrate the problem, let's look at a little R code. I'll define a routine that draws $n$ samples from a standard normal distribution (calling the results $X$), $n$ more from a second standard normal (calling the results $Y$), and then returns the sample variance of $Z=X/Y$.
    f <- function(n) {
      X <- rnorm(n)  # n draws from a standard normal
      Y <- rnorm(n)  # n independent draws from another standard normal
      Z <- X / Y
      var(Z)
    }
Let's look at the output for a few different random samples:
    > f(1e6)
    [1] 14135397
    > f(1e6)
    [1] 706438.6
    > f(1e6)
    [1] 5685218
    > f(1e6)
    [1] 11334216
    > f(1e6)
    [1] 2090359
You can see that we're getting results from as low as about 700,000 up to more than 14,000,000. The sample variance is completely dominated by the largest values of $Z$, which correspond to values of $Y$ near zero. This is what non-integrability looks like "in practice".
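For readers without R, here is an equivalent sketch in Python with NumPy (the function name `f` mirrors the R routine; the explicit seeds are my own choice, added for reproducibility):

```python
import numpy as np

def f(n, seed):
    # Mirror the R routine: n standard-normal draws for X and Y,
    # then the sample variance of Z = X / Y.
    rng = np.random.default_rng(seed)
    X = rng.standard_normal(n)
    Y = rng.standard_normal(n)
    Z = X / Y
    return Z.var(ddof=1)  # ddof=1 matches R's var(), which divides by n - 1

for seed in range(5):
    print(f(10**6, seed))
```

Running this shows the same behaviour as the R session: the reported variances jump around by orders of magnitude from seed to seed, because a handful of near-zero values of $Y$ dominate the sum.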