
I am trying to prove the sentence "A short calculation shows the above leads to M u = λ u (symmetry of M is needed here). Therefore λ is the largest eigenvalue of M." from the Wikipedia article on the Singular Value Decomposition. In short: show that $\nabla (\bar{x}^{T} M \bar{x}) = \lambda\, \nabla (\bar{x}^{T} \bar{x})$

implies $M \bar{u} = \lambda \bar{u}$, where $\bar{u}$ is the maximizing eigenvector, $M$ is a symmetric matrix, and $\lambda$ is the eigenvalue corresponding to $\bar{u}$, i.e. the largest eigenvalue.

[Update]

Maximize $\bar{x}^{T} M \bar{x}$ over all $\bar{x}$ such that $\|\bar{x}\|=1$. The Lagrange function is $f(\bar{x}) = \bar{x}^{T} M \bar{x} + \mu (\bar{x}^{T}\bar{x} - 1)$, writing the constraint equivalently as $\bar{x}^{T}\bar{x} = 1$.

Now I need to calculate $\nabla f(\bar{x})$.

I am calculating, trying to show that the largest eigenvalue occurs under the constraint.

The relevant nabla (del) operations are listed here.
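For reference (a sketch using the gradient identities worked out in the answer below), the gradient in question is

$\nabla f(\bar{x}) = \nabla(\bar{x}^{T} M \bar{x}) + \mu\,\nabla(\bar{x}^{T}\bar{x}) = 2M\bar{x} + 2\mu\bar{x},$

where the second equality relies on $M$ being symmetric.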

  • @hhh: Regarding your second edit, the del operator kills the constant term $\mu$, and the rest is in my previous comment. And $\mu$ becomes $\lambda$ on Wikipedia. – 2011-09-08

1 Answer


I'm not sure we agree on the interpretation of the Wikipedia paragraph. What is stated there (as I see it) is that, starting from the equation

$\nabla (\bar{x}^{T} M \bar{x}) = \lambda\, \nabla (\bar{x}^{T} \bar{x})$

we obtain that (if $M$ is symmetric) $\lambda$ must be an eigenvalue, and hence $x$ an eigenvector. This is true, and it follows easily from these two (useful) gradients:

$\nabla (x^T x) = 2x, \qquad \nabla (x^T M x) = 2Mx \quad (\text{for symmetric } M)$

These are the vector generalizations of $\frac{\partial (x^2)}{\partial x} = 2x$ and $\frac{\partial (m x^2)}{\partial x} = 2mx$, and can be proved directly from the definition.
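For completeness, here is a component-wise sketch of the second identity, since it is exactly where the symmetry of $M$ is needed. Writing $x^T M x = \sum_{i,j} x_i M_{ij} x_j$ and differentiating with respect to $x_k$,

$\frac{\partial}{\partial x_k} \sum_{i,j} x_i M_{ij} x_j = \sum_j M_{kj} x_j + \sum_i x_i M_{ik} = \left[(M + M^{T})x\right]_k ,$

so in general $\nabla(x^T M x) = (M + M^{T})x$, which reduces to $2Mx$ precisely when $M = M^{T}$.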

Now, the first equation was obtained by applying Lagrange multipliers to the problem of maximizing $x^T M x$ subject to $x^T x = 1$. Specifically: we write

$\nabla [x^T M x + \mu(x^Tx - 1)]=0 \Rightarrow \nabla( {x}^{T} M {x}) = \lambda \nabla ({x}^{T} {x} ) \hspace{40px} (\lambda = -\mu)$
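Substituting the two gradient identities above (for symmetric $M$) makes Wikipedia's "short calculation" explicit:

$2Mx = 2\lambda x \;\Longrightarrow\; Mx = \lambda x .$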

We then get that $\lambda$ (the Lagrange multiplier) must be *some* eigenvalue and $x$ some eigenvector. But since we want to maximize $x^T M x$, we must choose the eigenvector with the largest eigenvalue, because $x^T M x = x^T (\lambda x) = \lambda$ (using $x^T x = 1$).
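As a quick numerical sanity check (a minimal sketch in NumPy; the random symmetric matrix below is an arbitrary example, not taken from the question), the maximum of $x^T M x$ over unit vectors is attained at the top eigenvector and equals the largest eigenvalue:

```python
import numpy as np

# Build an arbitrary symmetric matrix M (illustrative example only).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
M = (A + A.T) / 2

# eigh is for symmetric matrices and returns eigenvalues in ascending order.
eigenvalues, eigenvectors = np.linalg.eigh(M)
lam_max = eigenvalues[-1]
u = eigenvectors[:, -1]          # unit eigenvector for the largest eigenvalue

# The quadratic form at u equals the largest eigenvalue: u^T M u = lambda_max.
print(u @ M @ u, lam_max)

# No random unit vector exceeds it (up to floating-point tolerance).
for _ in range(1000):
    x = rng.standard_normal(4)
    x /= np.linalg.norm(x)
    assert x @ M @ x <= lam_max + 1e-12
```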

  • Now I see it, of course. And the $Q^{T}\Sigma Q$ is just the general case, returning a matrix of scalars; $Q$ contains vectors such as the $x$ above. Thanks, @percusse. – 2011-09-09