Determining the variance of the sum of two correlated random variables
I understand that the variance of the sum of two independent normally distributed random variables is the sum of the variances, but how does this change when the two random variables are correlated?
-
There is also a good (and simple) explanation on [Insight Things](http://insight-things.com/why-you-can-add-variances). – 2016-03-07
4 Answers
For any two random variables: $\text{Var}(X+Y) =\text{Var}(X)+\text{Var}(Y)+2\text{Cov}(X,Y).$ If the variables are uncorrelated (that is, $\text{Cov}(X,Y)=0$), then
$\tag{1}\text{Var}(X+Y) =\text{Var}(X)+\text{Var}(Y).$ In particular, if $X$ and $Y$ are independent, then equation $(1)$ holds.
In general $ \text{Var}\Bigl(\,\sum_{i=1}^n X_i\,\Bigr)= \sum_{i=1}^n\text{Var}( X_i)+ 2\sum_{i< j} \text{Cov}(X_i,X_j). $ If for each $i\ne j$, $X_i$ and $X_j$ are uncorrelated, in particular if the $X_i$ are pairwise independent (that is, $X_i$ and $X_j$ are independent whenever $i\ne j$), then $ \text{Var}\Bigl(\,\sum_{i=1}^n X_i\,\Bigr)= \sum_{i=1}^n\text{Var}( X_i) . $
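A quick numerical check of the two-variable identity (a minimal NumPy sketch; the covariance matrix here is an arbitrary choice for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw correlated samples from a bivariate normal with an
# illustrative covariance matrix: Var(X)=1, Var(Y)=4, Cov(X,Y)=1.5.
cov = np.array([[1.0, 1.5],
                [1.5, 4.0]])
xy = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=1_000_000)
x, y = xy[:, 0], xy[:, 1]

# Var(X+Y) = Var(X) + Var(Y) + 2 Cov(X,Y); with ddof=0 throughout,
# the two sides agree exactly (it is an algebraic identity in the sample).
lhs = np.var(x + y)
rhs = np.var(x) + np.var(y) + 2 * np.cov(x, y, ddof=0)[0, 1]
print(lhs, rhs)  # both close to 1 + 4 + 2*1.5 = 8
```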
-
@IanHaggerty No. See [this](http://math.stackexchange.com/questions/249422/why-can-2-uncorrelated-random-variables-be-dependent) for example. – 2015-11-14
You can also think in vector form:
$\text{Var}(a^T X) = a^T \text{Var}(X) a$
where $a$ is a constant vector (or, more generally, a matrix) and $X = (X_1, X_2, \dots, X_n)^T$ is a vector of random variables; $\text{Var}(X)$ is its covariance matrix.
If $a = (1, 1, \dots, 1)^T$, then $a^T X$ is the sum of all the $X_i$.
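For instance (a small sketch; the covariance matrix below is made up for illustration), with $a$ the all-ones vector the quadratic form $a^T \text{Var}(X)\, a$ simply sums every entry of the covariance matrix:

```python
import numpy as np

# Illustrative covariance matrix for X = (X_1, X_2, X_3)^T.
Sigma = np.array([[2.0, 0.5, 0.1],
                  [0.5, 1.0, 0.3],
                  [0.1, 0.3, 3.0]])

a = np.ones(3)  # a = (1, 1, 1)^T picks out the plain sum X_1 + X_2 + X_3

# Var(a^T X) = a^T Var(X) a, which for a vector of ones is the
# sum of all entries of Sigma: the variances plus twice the covariances.
print(a @ Sigma @ a)  # 2 + 1 + 3 + 2*(0.5 + 0.1 + 0.3) = 7.8
print(Sigma.sum())    # same number
```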
Let's work this out from the definitions. Say we have two random variables $x$ and $y$, observed as $N$ paired samples, with means $\mu_x$ and $\mu_y$. The variances of $x$ and $y$ are:
$\sigma_x^2 = \frac{\sum_i(\mu_x-x_i)^2}{N}, \qquad \sigma_y^2 = \frac{\sum_i(\mu_y-y_i)^2}{N}$
Covariance of $x$ and $y$ is:
${\sigma_{xy}} = \frac{\sum_i(\mu_x-x_i)(\mu_y-y_i)}{N}$
Now consider the weighted sum $p$ of $x$ and $y$, with $p_i = w_x x_i + w_y y_i$. Its mean is
$\mu_p = w_x\mu_x + w_y\mu_y$
and its variance is
$$\begin{aligned}
\sigma_p^2 &= \frac{\sum_i(\mu_p-p_i)^2}{N} = \frac{\sum_i\bigl(w_x\mu_x + w_y\mu_y - w_x x_i - w_y y_i\bigr)^2}{N} \\
&= \frac{\sum_i\bigl(w_x(\mu_x - x_i) + w_y(\mu_y - y_i)\bigr)^2}{N} \\
&= \frac{\sum_i\Bigl(w_x^2(\mu_x - x_i)^2 + w_y^2(\mu_y - y_i)^2 + 2w_xw_y(\mu_x - x_i)(\mu_y - y_i)\Bigr)}{N} \\
&= w_x^2\frac{\sum_i(\mu_x-x_i)^2}{N} + w_y^2\frac{\sum_i(\mu_y-y_i)^2}{N} + 2w_xw_y\frac{\sum_i(\mu_x-x_i)(\mu_y-y_i)}{N} \\
&= w_x^2\sigma_x^2 + w_y^2\sigma_y^2 + 2w_xw_y\sigma_{xy}.
\end{aligned}$$
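As a sanity check, the final formula can be verified numerically (a sketch with arbitrary weights and synthetic data; all moments divide by $N$, i.e. use ddof=0, to match the definitions above):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic correlated data, purely for illustration.
x = rng.normal(size=100_000)
y = 0.6 * x + rng.normal(size=100_000)  # makes y correlated with x
w_x, w_y = 0.7, 0.3

p = w_x * x + w_y * y

# Population-style moments (divide by N), matching the /N definitions above.
var_x, var_y = np.var(x), np.var(y)
cov_xy = np.mean((x - x.mean()) * (y - y.mean()))

lhs = np.var(p)
rhs = w_x**2 * var_x + w_y**2 * var_y + 2 * w_x * w_y * cov_xy
print(lhs, rhs)  # equal up to floating-point rounding
```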