
For a matrix $A$, let $\|A\|$ be the operator norm given by $\|A\|=\sup_{v \neq 0}\frac{\|Av\|}{\|v\|}$, where $\|v\|$ is the Euclidean norm of the vector $v$.

Suppose we have matrices $M$ and $S$ with leading unit eigenvectors $u$ and $\hat{u}$ respectively. If $\|M-S\| \leq \theta\|M\|$, does this imply $\|\hat{u}-u\|\leq 2\theta$, and if so, how?

This comes from the paragraph containing equation (1) on page 2 of this paper http://www.stanford.edu/~montanar/RESEARCH/FILEPAP/GossipPCA.pdf but I don't see how the result is "immediate" as they claim.

In response to user1551: $M$ is an $n \times n$ symmetric matrix, and $u$ is "the" eigenvector of $M$ corresponding to the eigenvalue of largest magnitude.

$S$ is some sparsification of $M$ obtained by setting each entry of $M$ to $0$ independently with probability $1-p$ then rescaling non-zero entries by $1/p$.

$\hat{u}$ is "the leading eigenvector" of $S$.

Let's assume the entries of $M$ are all positive; then, by Perron–Frobenius, we may take $u$ and $\hat{u}$ to be non-negative unit eigenvectors, if that makes it easier.
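As a sanity check (my addition, not part of the original question), here is a minimal NumPy sketch of the sparsification just described, assuming the zeroing is done symmetrically so that $S$ stays symmetric; it compares $\|\hat{u}-u\|$ against $2\theta$ for one random matrix with positive entries.

```python
import numpy as np

rng = np.random.default_rng(0)

n, p = 200, 0.5
M = rng.random((n, n))
M = (M + M.T) / 2                          # symmetric, all entries positive

# Keep each entry with probability p, zeroing symmetrically,
# and rescale the kept entries by 1/p.
mask = rng.random((n, n)) < p
mask = np.triu(mask) | np.triu(mask, 1).T  # mirror the upper triangle
S = np.where(mask, M / p, 0.0)

def leading_unit_eigvec(A):
    # eigh returns eigenvalues of a symmetric matrix in ascending order
    w, V = np.linalg.eigh(A)
    v = V[:, -1]
    # fix the sign so the largest-magnitude entry is positive
    return v if v[np.argmax(np.abs(v))] > 0 else -v

u, u_hat = leading_unit_eigvec(M), leading_unit_eigvec(S)
theta = np.linalg.norm(M - S, 2) / np.linalg.norm(M, 2)  # spectral norms

print("||u_hat - u|| =", np.linalg.norm(u - u_hat))
print("2*theta       =", 2 * theta)
```

Running this for a few seeds gives a quick empirical feel for how $\|\hat{u}-u\|$ compares to $2\theta$ in this model, though of course it proves nothing either way.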

  • @user1551 For matrices with positive entries the leading eigenvector has positive entries; this eliminates the sign ambiguity. (2012-12-29)

1 Answer


I don't believe such a strong stability result holds. Consider the matrix $M=\begin{pmatrix} 1.01 & 0.01 \\ 0.01 & 1 \end{pmatrix}$. According to WolframAlpha, its leading eigenvector, normalized to unit length, is $u\approx (0.85, 0.525)^T$.

We don't need to be gurus of sparse computing to find $S=\begin{pmatrix} 1.01 & 0 \\ 0 & 1 \end{pmatrix}$ (up to irrelevant scaling). The leading eigenvector is obviously $\widehat u=(1,0)^T$.

In conclusion, $\|M-S\|\le 0.01\|M\|$ but $\|u-\widehat u\| > 0.5$, so with $\theta=0.01$ the proposed bound $\|\hat u-u\|\le 2\theta=0.02$ fails badly.
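The counterexample is easy to check numerically; here is a short NumPy verification (my addition, not part of the original answer) reproducing the numbers above.

```python
import numpy as np

# The matrices from the counterexample above.
M = np.array([[1.01, 0.01],
              [0.01, 1.00]])
S = np.array([[1.01, 0.00],
              [0.00, 1.00]])

def leading_unit_eigvec(A):
    # eigh returns eigenvalues of a symmetric matrix in ascending order
    w, V = np.linalg.eigh(A)
    v = V[:, -1]
    # fix the sign so the largest-magnitude entry is positive
    return v if v[np.argmax(np.abs(v))] > 0 else -v

u = leading_unit_eigvec(M)      # ≈ (0.851, 0.526)
u_hat = leading_unit_eigvec(S)  # = (1, 0)

theta = np.linalg.norm(M - S, 2) / np.linalg.norm(M, 2)  # ≈ 0.0098
gap = np.linalg.norm(u - u_hat)                          # ≈ 0.55

print(theta, gap, gap <= 2 * theta)  # the bound ||u_hat - u|| <= 2*theta fails
```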


I have heard that Wilkinson's book The Algebraic Eigenvalue Problem is where stability results of this kind can be found, though I have not read it myself.

  • The OP says that $S$ is scaled by $1/p$, so I'm not so sure that minimizing $\|M-S\|$ is the only goal. Actually, that sparsification procedure is a bit odd: when $p\approx0$, the $S$ before scaling should already be quite different from $M$, yet $S$ is then rescaled by the large factor $1/p$. So I completely can't see what is intended. But that's another story anyway. (2012-12-29)