
This assertion came up in a Deep Learning course I am taking. I understand intuitively that the eigenvector with the largest eigenvalue will be the direction in which the most variance occurs. I understand why we use the covariance matrix's eigenvectors for Principal Component Analysis.

However, I do not understand why the variance along each eigenvector is equal to its corresponding eigenvalue. I would prefer a formal proof, but an intuitive explanation may be acceptable.

(Note: this is not a duplicate of this question.)

  • As you build the matrix $M$ as a sum of outer products, $M = \sum v v^T$, the entries end up being the expected values $M_{ij} = \Bbb E[v_i v_j]$, with $i$ and $j$ being vector positions. This is all before any transformation to the space of principal vectors is done. Maybe it can help in figuring out the rest. (2017-02-16)
  • As mentioned in Omnom's answer, $M_{ij}$ will contain $\sum v_i v_j$, which is one way to estimate $\Bbb E[X_i X_j]$, assuming the components $v_i$ of the sample vectors are drawn from the random variables $X_i$. (2017-02-16)
  • An eigenvector of a covariance matrix is not a random vector, so "the variance of an eigenvector" does not make sense on its own. If it were a random vector, it would make more sense to talk about the covariance matrix of that random vector rather than its variance. (2017-06-30)
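The comments above describe the covariance matrix as an average of outer products of mean-zero sample vectors. A minimal NumPy sketch of that construction (the particular distribution, covariance values, and sample size below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
# Mean-zero samples: each row is one observation of the random vector X.
true_cov = np.array([[2.0, 0.5], [0.5, 1.0]])
samples = rng.multivariate_normal([0.0, 0.0], true_cov, size=100_000)

# Covariance as an average of outer products: M_ij ~ E[X_i X_j].
M = sum(np.outer(v, v) for v in samples) / len(samples)

# The same estimate in vectorized form, X^T X / n.
M_vec = samples.T @ samples / len(samples)

assert np.allclose(M, M_vec)
```

With enough samples, `M` approaches the true covariance matrix, which is why this average of outer products is used as the estimator.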

1 Answer


Here's a formal proof: suppose that $v$ denotes a length-$1$ eigenvector of the covariance matrix, which is defined by $$ \Sigma = \Bbb E[XX^T] $$ where $X = (X_1,X_2,\dots,X_n)$ is a column vector of random variables with mean zero (which is to say that we've already absorbed the means into the variables' definitions). So, we have $\Sigma v = \lambda v$ (for some $\lambda \geq 0$), and $v^Tv = 1$.

Now, what do we really mean by "the variance of $v$"? $v$ is not a random variable. Really, what we mean is the variance of the associated component of $X$. That is, we're asking about the variance of $v^TX$ (the dot product of $X$ with $v$). Note that, since the $X_i$s have mean zero, so does $v^TX$. We then find $$ \Bbb E([v^TX]^2) = \Bbb E([v^TX][X^Tv]) = \Bbb E[v^T(XX^T)v] = v^T\Bbb E(XX^T) v \\ = v^T\Sigma v = v^T\lambda v = \lambda(v^Tv) = \lambda $$ and this is what we wanted to show.
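The chain of equalities above can also be checked numerically: project mean-zero samples onto each eigenvector of the sample covariance matrix and compare the variance of the projection to the eigenvalue. A sketch in NumPy (the covariance matrix below is an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(1)
cov = np.array([[3.0, 1.0, 0.0],
                [1.0, 2.0, 0.0],
                [0.0, 0.0, 1.0]])
X = rng.multivariate_normal([0.0, 0.0, 0.0], cov, size=200_000)
X -= X.mean(axis=0)          # absorb the mean, as in the proof

Sigma = X.T @ X / len(X)     # sample covariance, estimating E[X X^T]
eigvals, eigvecs = np.linalg.eigh(Sigma)  # eigenvectors are the columns

for lam, v in zip(eigvals, eigvecs.T):
    proj = X @ v             # the component v^T X for each sample
    # Var(v^T X) = v^T Sigma v = lambda
    assert np.isclose(proj.var(), lam)
```

Because `Sigma` is computed from the same centered samples, the identity $\operatorname{Var}(v^TX) = v^T\Sigma v = \lambda$ holds up to floating-point error, not just approximately in the sample size.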

  • Perfect answer. Thank you very much. (2017-02-16)