
For a matrix $A$, let $\|A\|$ be the operator norm given by $\|A\|=\sup_{v \neq 0}\frac{\|Av\|}{\|v\|}$, where $\|v\|$ is the Euclidean norm of the vector $v$.

Suppose we have matrices $M$ and $S$ with leading eigenvectors $u$ and $\hat{u}$, respectively. If $\|M-S\| \leq \theta\|M\|$, does this imply $\|\hat{u}-u\|\leq 2\theta$, and if so, how?

This comes from the paragraph containing equation (1) on page 2 of the paper http://www.stanford.edu/~montanar/RESEARCH/FILEPAP/GossipPCA.pdf, but I don't see how the result is "immediate", as the authors claim.

In response to user1551: $M$ is an $n \times n$ symmetric matrix, and $u$ is "the" eigenvector of $M$ corresponding to the eigenvalue of largest magnitude.

$S$ is a sparsification of $M$, obtained by setting each entry of $M$ to $0$ independently with probability $1-p$ and then rescaling the non-zero entries by $1/p$.

$\hat{u}$ is "the leading eigenvector" of $S$.

Let's assume the entries of $M$ are all positive; then, by Perron-Frobenius, we can take $u$ and $\hat{u}$ to be non-negative unit eigenvectors, if that makes it easier.
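To make the setup concrete, here is a minimal NumPy sketch of the sparsification described above. The test matrix, the value $p=0.7$, and the symmetric masking are illustrative assumptions on my part, not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def sparsify(M, p, rng):
    """Zero each entry with probability 1-p, rescale survivors by 1/p.
    Masking the upper triangle and mirroring keeps S symmetric
    (an assumption; the paper's exact scheme may differ)."""
    mask = rng.random(M.shape) < p
    mask = np.triu(mask) | np.triu(mask, 1).T
    return np.where(mask, M / p, 0.0)

def leading_unit_eigvec(A):
    """Unit eigenvector of symmetric A for the largest-magnitude eigenvalue,
    signed so that its entries sum to a non-negative value."""
    w, V = np.linalg.eigh(A)
    v = V[:, np.argmax(np.abs(w))]
    return -v if v.sum() < 0 else v

# Illustrative positive symmetric matrix (not from the paper).
B = rng.random((6, 6)) + 0.1
M = (B + B.T) / 2
S = sparsify(M, p=0.7, rng=rng)

theta = np.linalg.norm(M - S, 2) / np.linalg.norm(M, 2)   # operator (spectral) norm
u, u_hat = leading_unit_eigvec(M), leading_unit_eigvec(S)
print(theta, np.linalg.norm(u - u_hat))
```

One can then compare $\|\hat{u}-u\|$ against $2\theta$ over many random draws.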

  • Ah, the old "it follows immediately" trick. (2012-12-27)
  • Please elaborate on the definitions of $M$ and $S$. For non-specialists like me, the descriptions of the two matrices in the paper are too vague. How the eigenvectors are chosen is also unclear. For instance, if $u$ is an eigenvector, so is $ku$ for all $k\not=0$. So, does the cited paper claim that $\|k_1\hat{u}+k_2u\|\le2\theta$ for any $k_1,k_2\not=0$? Even if we take unit eigenvectors, the statement that $\|\hat{u}\pm u\|\le2\theta$ (if $u$ is an eigenvector, so is $-u$) is not true in general. (2012-12-27)
  • @user1551 For matrices with positive entries, the leading eigenvector has positive entries; this eliminates the sign ambiguity. (2012-12-29)

1 Answer


I don't believe such a strong stability result. Consider the matrix $$M=\begin{pmatrix} 1.01 & 0.01 \\ 0.01 & 1 \end{pmatrix}$$ According to WolframAlpha, its leading eigenvector, normalized to unit length, is $u\approx (0.85, 0.525)^T$.
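For completeness, this is easy to check by hand: the characteristic polynomial of $M$ is $\lambda^2-2.01\lambda+1.0099$, so $$\lambda_{\pm}=1.005\pm\tfrac{1}{2}\sqrt{2.01^2-4\cdot 1.0099}=1.005\pm\tfrac{1}{2}\sqrt{0.0005}\approx 1.0162,\ 0.9938,$$ and solving $(1.01-\lambda_+)x+0.01y=0$ gives $y/x=(\lambda_+-1.01)/0.01\approx 0.618$, hence $u\approx(1,0.618)^T/\sqrt{1+0.618^2}\approx(0.851,0.526)^T$.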

We don't need to be gurus of sparse computing to find $$S=\begin{pmatrix} 1.01 & 0 \\ 0 & 1 \end{pmatrix}$$ (up to irrelevant scaling). The leading eigenvector is obviously $\widehat u=(1,0)^T$.

In conclusion, $\|M-S\|\le 0.01\|M\|$ but $\|u-\widehat u\| > 0.5$.
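A quick numerical check of these two inequalities (a minimal NumPy sketch; the sign-fixing convention in the helper is my own):

```python
import numpy as np

M = np.array([[1.01, 0.01],
              [0.01, 1.00]])
S = np.diag([1.01, 1.00])

def leading_unit_eigvec(A):
    """Unit eigenvector of symmetric A for the largest-magnitude eigenvalue,
    signed so that its entries sum to a non-negative value."""
    w, V = np.linalg.eigh(A)
    v = V[:, np.argmax(np.abs(w))]
    return -v if v.sum() < 0 else v

u = leading_unit_eigvec(M)       # approx (0.851, 0.526)
u_hat = leading_unit_eigvec(S)   # exactly (1, 0)

print(np.linalg.norm(M - S, 2) / np.linalg.norm(M, 2))  # approx 0.0098 <= 0.01
print(np.linalg.norm(u - u_hat))                        # approx 0.546  >  0.5
```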


I hear that the book The Algebraic Eigenvalue Problem by Wilkinson is where stability results of this kind are found; I have never read the book myself, though.

  • This example doesn't work. By the OP's explanation of the construction of $S$, $S$ is either the zero matrix (when $p=0$) or it is scaled by a factor larger than $1$ (when $p<1$) after some entries of $M$ are set to zero. (2012-12-29)
  • But apparently your example can be modified to work, so +1. (2012-12-29)
  • @user1551 Scaling does not affect eigenvectors, as my parenthetical remark indicated. (2012-12-29)
  • Yes, but scaling would affect $\|M-S\|$, so you need a continuity argument to close the loophole. (2012-12-29)
  • @user1551 Valid point. However, the purpose of scaling is to bring $S$ closer to $M$, not to make it more different. (The goal of sparsification is to produce a sparse matrix that is close to the original one.) If keeping the size of the errors much less than $1$ weren't important, we could just say that $\|u-\widehat u\|\le 2$ and save ourselves a lot of time. (2012-12-29)
  • The OP says that $S$ is scaled by $1/p$, so I'm not so sure that minimizing $\|M-S\|$ is the only goal. Actually, that sparsification procedure is a bit weird: when $p\approx0$, the $S$ before scaling should already be quite different from $M$, yet $S$ is then rescaled by the large factor $1/p$. So I can't see at all what is intended. But that's another story anyway. (2012-12-29)