
I have this very simple question:

If I have a matrix $A$ and a matrix $B$, and matrix $A$ is close to matrix $B$, can I then conclude that matrix $A^{-1}$ is close to $B^{-1}$?

In other words, can I conclude that the following is true: $A$ close to $B \Leftrightarrow A^{-1}$ close to $B^{-1}$?

  • 2
    The inverse operator is continuous on the set of invertible matrices, which means that for any $A$ and any neighborhood $U$ of $A^{-1}$, there is a neighborhood $V$ of $A$ such that the inverses of all elements of $V$ are in $U$. The size of $V$ depends on both the size of $U$ and $A$ itself, but that is true even for real numbers (a.k.a. $1\times 1$ matrices): if $a$ is very close to zero, then the set of $b$ such that $1/b$ is close to $1/a$ is going to be very small. (2011-10-27)

2 Answers

4

I have edited my answer, thanks to Michael. I would still say that it depends on what you mean by 'close'. As Thomas has already written, the inverse operator is continuous on the (open) set of invertible matrices. Moreover, for any such matrix $C$ there exists $\delta>0$ such that all matrices which are $\delta$-close to $C$, say in the sup-norm $\|A\| = \max\limits_{ij}|a_{ij}|$ (here $A = (a_{ij})_{i,j=1}^n$), are also invertible. This easily follows from the fact that the determinant is a continuous function on the space of square matrices (of any fixed dimension).
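
Here is a small numerical sketch of that last point (not part of the original answer; the matrix $C$, the radius, and the use of NumPy are my own choices for illustration): random perturbations within a small sup-norm ball of an invertible $C$ leave the determinant close to $\det(C)$, so the perturbed matrices stay invertible.

```python
import numpy as np

rng = np.random.default_rng(0)
C = np.array([[2.0, 1.0],
              [1.0, 3.0]])          # invertible, det(C) = 5
delta = 0.01                        # radius of the sup-norm ball around C

for _ in range(5):
    E = rng.uniform(-delta, delta, size=C.shape)   # every entry of E is at most delta in absolute value
    A = C + E
    # det is continuous, so det(A) stays near 5 and A remains invertible
    print(np.max(np.abs(A - C)), np.linalg.det(A))
```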

I would emphasize, though, that uniform continuity does not hold. That is the kind of closeness I understood from your question, so be aware of it. You can verify this already for $1\times 1$ matrices, which are just real numbers: $|0.001-0.002| = 0.001$, but $|(0.001)^{-1}-(0.002)^{-1}| = 500$.
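
A quick sketch of this non-uniformity, treating the numbers as $1\times 1$ matrices (my own NumPy illustration, not from the original answer): the same perturbation of size $0.001$ moves the inverse by wildly different amounts depending on whether you sit near zero or far from it.

```python
import numpy as np

for a, b in [(0.001, 0.002), (1000.0, 1000.001)]:
    A = np.array([[a]])             # 1x1 matrices are just real numbers
    B = np.array([[b]])
    dist = np.abs(A - B).max()                                   # sup-norm distance
    inv_dist = np.abs(np.linalg.inv(A) - np.linalg.inv(B)).max()
    print(f"|A-B| = {dist:.3g},  |A^-1 - B^-1| = {inv_dist:.6g}")
# near zero:      |A-B| = 0.001 but |A^-1 - B^-1| = 500
# far from zero:  |A-B| = 0.001 but |A^-1 - B^-1| is about 1e-9
```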

Further, $B$ may have no inverse at all, yet for any $\delta>0$ you can find an invertible $A(\delta)$ with $\|A(\delta)-B\|\leq \delta.$
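
A concrete sketch of this, using one possible choice $A(\delta) = B + \delta I$ (my choice of example, not from the original answer): $A(\delta)$ is invertible for every $\delta > 0$, but its inverse blows up as $\delta \to 0$.

```python
import numpy as np

B = np.array([[1.0, 0.0],
              [0.0, 0.0]])          # rank 1, has no inverse
for delta in [1e-1, 1e-3, 1e-6]:
    A = B + delta * np.eye(2)       # invertible for every delta > 0
    print(delta,
          np.max(np.abs(A - B)),                # sup-norm distance, equal to delta
          np.max(np.abs(np.linalg.inv(A))))     # largest entry of A^-1, equal to 1/delta
```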

Finally, it is perhaps worth mentioning that $A^{-1}$ may be close to $B^{-1}$ even if $A$ and $B$ are not close: consider $A = 1000$ and $B = 2000$.

  • 0
    @Michael: thank you for the suggestion, I've edited my answer to address that point as well. (2011-10-28)
5

No, you cannot in general. But the extent to which you can, for a given matrix $A$, is quantified by the condition number $\kappa(A)$ of the matrix $A$, as the linked page explains competently.
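
As a rough illustration (my own sketch, not taken from the linked page; the matrices and the perturbation are made up), one can compare how the same small perturbation moves the inverse of a well-conditioned and of an ill-conditioned matrix; the amplification is on the order of $\kappa(A)$.

```python
import numpy as np

def relative_inverse_change(A, E):
    """||(A+E)^-1 - A^-1|| / ||A^-1|| in the spectral norm (assumes A and A+E are invertible)."""
    Ainv = np.linalg.inv(A)
    return np.linalg.norm(np.linalg.inv(A + E) - Ainv, 2) / np.linalg.norm(Ainv, 2)

well = np.array([[2.0, 0.0], [0.0, 1.0]])           # kappa = 2
ill  = np.array([[1.0, 1.0], [1.0, 1.0001]])        # nearly singular, kappa ~ 4e4
E    = 1e-5 * np.array([[1.0, -1.0], [0.5, 1.0]])   # one fixed small perturbation

for A in (well, ill):
    print(np.linalg.cond(A, 2), relative_inverse_change(A, E))
# the ill-conditioned matrix amplifies the same perturbation far more,
# roughly in proportion to kappa(A)
```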


For the sake of completeness, I append the answer I wrote for the other question (now closed as a duplicate) asked by the OP, whose formulation was:

*Ill-conditioned matrix.* Consider this problem: $Ax=b$. I want to solve it, that is, find $x$, and the matrix $A$ is ill-conditioned. Why is the fact that $A$ is ill-conditioned a bad thing?

Because if one knows the coefficients of $A$ only up to a given precision, a small variation in them will cause a huge variation in the coefficients of $A^{-1}$, hence, presumably, in the solution $x=A^{-1}b$. Alternatively, even if $A^{-1}$ is known with absolute precision, if one knows the coefficients of $b$ only up to a given precision, a small variation in them will cause a notable variation in the coefficients of the solution $x=A^{-1}b$, since some coefficients of $A^{-1}$ are large. In real life, both effects often conspire. See the definition of the condition number of a matrix.
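
For instance, here is a small sketch of the second effect (my own example, not part of the original answer, using a Hilbert matrix, which is notoriously ill-conditioned): a relative change of about $10^{-8}$ in $b$ produces a much larger relative change in the solution $x$.

```python
import numpy as np

n = 6
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])  # 6x6 Hilbert matrix
x_true = np.ones(n)
b = A @ x_true

db = 1e-8 * np.ones(n)                       # tiny perturbation of the right-hand side
x_perturbed = np.linalg.solve(A, b + db)

print("kappa(A)             =", np.linalg.cond(A))
print("relative change in b =", np.linalg.norm(db) / np.linalg.norm(b))
print("relative change in x =", np.linalg.norm(x_perturbed - x_true) / np.linalg.norm(x_true))
# the change in x can be amplified by a factor of up to kappa(A), here about 1e7
```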

  • 0
    @J.M. It seems only natural to rely on a norm (or any kind of metric) to approach statements like *if matrices $A$ and $B$ are close, then matrices $C$ and $D$ are close*. (2011-10-28)