
I have two questions that are related to calculating autocovariances and autocorrelations of processes.

Let's say we have a simple process:

$y_t = ε_t + ε_{t-1}$

where the $ε_{t}$ are i.i.d. standard normal random variables.

To calculate the lag 1 autocovariance, we'll do the following:

$Cov(y_t,y_{t-1}) $

$= Cov(ε_t + ε_{t-1}, ε_{t-1} + ε_{t-2})$

$=Cov(ε_{t-1},ε_{t-1})$

$=1*1*1$

$=1$

To calculate $Cov(ε_{t-1},ε_{t-1})$, I know you can take the coefficient in front of each $ε_{t-1}$ and multiply them together with the variance of $ε_{t-1}$ to get $1*1*1$. So I know this little trick gives the covariance, but I don't understand why it works. Why can you just multiply like this to get the covariance?
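As a quick sanity check on the $1$ above (just a rough simulation sketch, assuming the $ε_t$ are i.i.d. standard normal; the numpy-based code is mine, not part of the derivation), the sample lag 1 autocovariance does come out close to $1$:

```python
import numpy as np

# Simulate y_t = e_t + e_{t-1} with i.i.d. standard normal e_t and
# estimate the lag-1 autocovariance, which should be near 1.
rng = np.random.default_rng(0)
n = 1_000_000
e = rng.standard_normal(n + 1)
y = e[1:] + e[:-1]                # y_t = e_t + e_{t-1}

# Sample lag-1 autocovariance: mean of (y_t - ybar)(y_{t-1} - ybar).
yc = y - y.mean()
print(np.mean(yc[1:] * yc[:-1]))  # prints roughly 1.0
```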

Which brings me to my next question: for the same process, can you do the same thing to calculate the lag 1 autocorrelation?

$Cor(y_t,y_{t-1}) $

$= Cor(ε_t + ε_{t-1}, ε_{t-1} + ε_{t-2})$

$=Cor(ε_{t-1},ε_{t-1})$

$=???$

If I can use this same method for autocorrelation, what trick can I use to calculate $Cor(ε_{t-1},ε_{t-1})$?

  • Do you know the definition of correlation? (comment, 2012-03-04)

2 Answers


You have $Cov(X,X)=Var(X)$, and if $X$ and $Y$ are independent then $Cov(X,Y) = 0$ and $Var(X+Y)=Var(X)+Var(Y)$, so

$Cor(y_t,y_{t-1})=\dfrac{Cov(y_t,y_{t-1})}{\sqrt{Var(y_t)}\sqrt{Var(y_{t-1})}}$

$=\dfrac{Cov(\varepsilon_t + \varepsilon_{t-1}, \varepsilon_{t-1} + \varepsilon_{t-2})}{\sqrt{Var(\varepsilon_t + \varepsilon_{t-1})}\sqrt{Var(\varepsilon_{t-1} + \varepsilon_{t-2})}}$

$=\dfrac{Cov(\varepsilon_t , \varepsilon_{t-1})+Cov(\varepsilon_t , \varepsilon_{t-2})+Cov( \varepsilon_{t-1}, \varepsilon_{t-1} )+Cov( \varepsilon_{t-1}, \varepsilon_{t-2})}{\sqrt{Var(\varepsilon_t + \varepsilon_{t-1})}\sqrt{Var(\varepsilon_{t-1} + \varepsilon_{t-2})}}$

$=\dfrac{0+0+Var(\varepsilon_{t-1})+0}{\sqrt{Var(\varepsilon_t) + Var(\varepsilon_{t-1})}\sqrt{Var(\varepsilon_{t-1}) + Var(\varepsilon_{t-2})}}$ by independence

$=\dfrac{V}{\sqrt{V + V}\sqrt{V + V}}=\dfrac{1}{2}$

where $V=Var(\varepsilon_t)= Var(\varepsilon_{t-1}) = Var(\varepsilon_{t-2})$.
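If a numerical check helps (a rough sketch only; it assumes i.i.d. standard normal $\varepsilon_t$ and uses numpy, which is not part of the derivation above), the sample lag 1 autocorrelation of a simulated path is indeed close to $1/2$:

```python
import numpy as np

# Simulate y_t = e_t + e_{t-1} and estimate the lag-1 autocorrelation,
# which should be close to the theoretical value 1/2.
rng = np.random.default_rng(1)
n = 1_000_000
e = rng.standard_normal(n + 1)
y = e[1:] + e[:-1]

# np.corrcoef gives the 2x2 correlation matrix of (y_t, y_{t-1}).
print(np.corrcoef(y[1:], y[:-1])[0, 1])  # prints roughly 0.5
```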


The point is that $Cov(X+Y,Y+Z) = Cov(X,Y)+Cov(X,Z)+Cov(Y,Y)+Cov(Y,Z)$: covariance is bilinear. $Cov(X,Y)$, $Cov(X,Z)$, $Cov(Y,Z)$ are $0$ if $X,Y,Z$ are independent. The covariance of a random variable and itself is the same as the variance of that random variable (look at the definitions, you'll see they are the same). A standard normal random variable has variance $1$.
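Written out for constants $a$ and $b$:

$Cov(a\varepsilon_{t-1}, b\varepsilon_{t-1}) = ab\,Cov(\varepsilon_{t-1},\varepsilon_{t-1}) = ab\,Var(\varepsilon_{t-1})$

so with $a=b=1$ and $Var(\varepsilon_{t-1})=1$ this is exactly the $1*1*1$ in your question.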

As for correlation, assuming you mean the Pearson correlation, the definition of that is $Cor(X,Y) = Cov(X,Y)/(\sigma_X \sigma_Y)$. But correlation is not bilinear: $Cor(X+Y,Y+Z) \ne Cor(X,Y) + Cor(X,Z) + Cor(Y,Y) + Cor(Y,Z)$ because the standard deviations are different.
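Concretely, $Cor(\varepsilon_{t-1},\varepsilon_{t-1}) = Var(\varepsilon_{t-1})/(\sigma_{\varepsilon_{t-1}}\sigma_{\varepsilon_{t-1}}) = 1$, since any random variable has correlation $1$ with itself; but the lag 1 autocorrelation of $y_t$ is $1/2$, because the covariance $Var(\varepsilon_{t-1}) = 1$ still has to be divided by $\sigma_{y_t}\sigma_{y_{t-1}} = \sqrt{2}\cdot\sqrt{2} = 2$, as in the other answer.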