$M_n$ is an $n\times n$ matrix with $M_{n+1}=\begin{pmatrix}M_n & a_n \\ b_n^T & c_n\end{pmatrix}$ and $a_n, b_n, c_n \to 0$ as $n\to \infty$. Is this sufficient to conclude that $ \lim_{n\to\infty}(M_n^{-1}) = (\lim_{n\to\infty}M_n)^{-1} ?$
Does $M_n^{-1}$ converge for a sequence of growing matrices $M_n$?
@mpiktas: I'm going to add an answer with some more info than my comments tomorrow. – 2011-03-02
2 Answers
Let $ \Omega(\mathbb{R}^{n\times n})=\{ X\in\mathbb{R}^{n\times n}: X^{-1} \mbox{ exists} \} $ and $\Omega(\mathbb{R}^{\mathbb{N}\times \mathbb{N}}) = \bigcup_{n\in\mathbb{N}}\Omega(\mathbb{R}^{n\times n})$, i.e. $ \Omega(\mathbb{R}^{\mathbb{N}\times \mathbb{N}}) = \left\{ X : \mbox{ there is } n_0\in\mathbb{N} \mbox{ such that } X\in\Omega(\mathbb{R}^{n_0\times n_0}) \right\}. $ Note that $\Omega(\mathbb{R}^{n\times n})$ is an open subset of $\mathbb{R}^{n\times n}$ (it is not a vector space: the sum of two invertible matrices need not be invertible). Suppose that $M_n\in\Omega(\mathbb{R}^{p_n\times p_n})$ and $c_n\in\Omega(\mathbb{R}^{q_n\times q_n})$ with $p_n+q_n=n$ for all $n\in\mathbb{N}$.
Proposition. For every positive $k\in\mathbb{N}$ the map $ \Omega(\mathbb{R}^{k\times k})\ni X\mapsto X^{-1}\in\Omega(\mathbb{R}^{k\times k}) $ is continuous.
Proof. To see this, use the block matrix inversion formula $ \begin{pmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{C} & \mathbf{D} \end{pmatrix}^{-1} = \begin{pmatrix} \mathbf{A}^{-1}+\mathbf{A}^{-1}\mathbf{B}(\mathbf{D}-\mathbf{CA}^{-1}\mathbf{B})^{-1}\mathbf{CA}^{-1} & -\mathbf{A}^{-1}\mathbf{B}(\mathbf{D}-\mathbf{CA}^{-1}\mathbf{B})^{-1} \\ -(\mathbf{D}-\mathbf{CA}^{-1}\mathbf{B})^{-1}\mathbf{CA}^{-1} & (\mathbf{D}-\mathbf{CA}^{-1}\mathbf{B})^{-1} \end{pmatrix} $ and induction on $k$.
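As a quick numeric sanity check (not part of the original proof), the block inversion formula above can be verified with NumPy on a random matrix; the split into blocks $A, B, C, D$ is an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random 5x5 matrix; the diagonal shift keeps it (and the A block) invertible.
M = rng.standard_normal((5, 5)) + 5 * np.eye(5)
A, B = M[:3, :3], M[:3, 3:]
C, D = M[3:, :3], M[3:, 3:]

Ainv = np.linalg.inv(A)
S = D - C @ Ainv @ B          # Schur complement of A in M
Sinv = np.linalg.inv(S)

# Assemble the inverse block by block, following the formula in the answer.
top_left  = Ainv + Ainv @ B @ Sinv @ C @ Ainv
top_right = -Ainv @ B @ Sinv
bot_left  = -Sinv @ C @ Ainv
bot_right = Sinv
Minv_blocks = np.block([[top_left, top_right], [bot_left, bot_right]])

print(np.allclose(Minv_blocks, np.linalg.inv(M)))  # True
```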
Proposition. If $\displaystyle\lim_{n\to\infty}M_n$ exists and $\displaystyle\lim_{n\to\infty}M_n\in\Omega(\mathbb{R}^{\mathbb{N}\times\mathbb{N}})$ (so the sizes $p_n+q_n=n$ eventually stabilize at some $n_0\in\mathbb{N}$), then $ \lim_{n\to \infty} M_n^{-1}= \big(\lim_{n\to \infty} M_n \big)^{-1}. $
Proof. As before, use the block matrix inversion formula and induction on $n=p_n+q_n$.
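The fixed-size continuity in the first proposition can also be illustrated numerically (a sketch, with an arbitrarily chosen $3\times 3$ limit matrix $M$ and perturbation $E/k$): as $M + E/k \to M$, the inverses converge to $M^{-1}$.

```python
import numpy as np

# Fixed size 3x3: perturb an invertible limit matrix M by E/k and watch
# the inverses approach inv(M), illustrating continuity of X -> X^{-1}.
M = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 4.0]])
E = np.ones((3, 3))

errors = [np.linalg.norm(np.linalg.inv(M + E / k) - np.linalg.inv(M))
          for k in (1, 10, 100, 1000)]
print(errors)  # decreasing toward 0, roughly like 1/k
```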
These considerations are a bit more than a comment, but not yet a full answer.
I have looked at such a situation, where I also wanted to discuss the inverse in terms of the LDU decomposition of $\small M_n$. Then $\small L$ and $\small U$ are triangular and $\small D$ is even diagonal, so if $\small M_n$ is invertible at all, then so are $\small D$, $\small L$ and $\small U$.
The entries of all these matrices stay constant as the matrix size $n$ increases, and the same is true for their inverses: increasing the matrix size only appends new entries.
Now, the inverse $\small Q_n = M_n^{-1} $ is also $\small U^{-1} \cdot D^{-1} \cdot L^{-1} $ (let us write $\small V=U^{-1}, \qquad W=L^{-1} $ and denote their entries by the respective small letters $v$ and $w$). Then, for instance, the top-left element of $\small Q_n$, call it $\small q_{n;0,0} $, is the sum $\small \displaystyle \sum_{k=0}^n {v_{0,k} \cdot w_{k,0} \over d_{k,k}} $ over the first row of $V$ and the first column of $W$. Now if $n$ goes to infinity, this sum can clearly diverge even when its summands converge to zero (think of the harmonic series), and in such cases the limit matrix "does not exist" (or "is not well defined", or "contains singularities"). So it is obvious that there must be further restrictions on your $\small a_n, b_n, c_n $ to make sure that you do not run into such singularities.
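A deliberately simple sketch of this warning (my own toy example, not from the thread): take $\small M_n = \operatorname{diag}(1, 1/2, \dots, 1/n)$, so $\small a_n = b_n = 0$ and $\small c_n = 1/n \to 0$, yet the newly appended entry of $\small M_n^{-1}$ is $n$, so the inverses have no bounded limit matrix.

```python
import numpy as np

# M_n = diag(1, 1/2, ..., 1/n): the appended blocks a_n, b_n, c_n all
# tend to 0, but the corner entry of the inverse is n and grows without bound.
for n in (2, 4, 8, 16):
    M = np.diag(1.0 / np.arange(1, n + 1))
    Minv = np.linalg.inv(M)
    print(n, Minv[-1, -1])  # corner entry equals n
```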