
Let $A=\left(\begin{array}{cc}1&1\\1&1\end{array}\right)$. How can I calculate, and prove, that the Moore–Penrose pseudoinverse of $A$ equals $\left(\begin{array}{cc}\frac{1}{4}&\frac{1}{4}\\\frac{1}{4}&\frac{1}{4}\end{array}\right)$? Thank you very much.

  • You can start by stating the definition. That is what you have to work with. (2017-02-02)
  • Welcome to the site, by the way. It seems you are already familiar with typesetting, which is good, as it increases the chances of a positive reception. It can also help to know that askers are encouraged to show their own attempts, especially when the question looks like homework. (2017-02-02)

2 Answers


Your $A$ is selfadjoint, so the Moore-Penrose inverse will be simply the inverse of the restriction to the orthogonal complement of the kernel. You have $$ A= 2 P_1 + 0\,(I-P_1), $$ where $P_1=\begin{bmatrix} 1/2&1/2\\ 1/2&1/2\end{bmatrix}$. On the range of $P_1$, $A$ acts as multiplication by $2$. So $$ A^+=\frac12\,P_1=\begin{bmatrix} 1/4&1/4\\ 1/4&1/4\end{bmatrix}. $$
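The spectral argument above can be sketched numerically (a minimal check, assuming NumPy; variable names are mine):

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 1.0]])
P1 = np.array([[0.5, 0.5], [0.5, 0.5]])  # orthogonal projection onto the range of A

# A = 2*P1 + 0*(I - P1): A acts as multiplication by 2 on ran(P1)
assert np.allclose(A, 2 * P1)

A_plus = 0.5 * P1  # invert the nonzero eigenvalue on its eigenspace

# Check the four Moore-Penrose conditions
assert np.allclose(A @ A_plus @ A, A)
assert np.allclose(A_plus @ A @ A_plus, A_plus)
assert np.allclose((A @ A_plus).T, A @ A_plus)
assert np.allclose((A_plus @ A).T, A_plus @ A)

# Agrees with numpy's built-in pseudoinverse
assert np.allclose(A_plus, np.linalg.pinv(A))
```

The assertions confirm both the claimed decomposition $A=2P_1$ and that $\frac12 P_1$ satisfies all four Penrose conditions.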


When $A$ is not selfadjoint, we usually define $A^+=A^*(AA^*)^+$ (which is equal to $(A^*A)^+A^*$).
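The identity $A^+=A^*(AA^*)^+=(A^*A)^+A^*$ can be spot-checked on a made-up non-selfadjoint matrix (the matrix `M` below is an arbitrary example, not from the question):

```python
import numpy as np

M = np.array([[1.0, 2.0, 0.0], [0.0, 1.0, 1.0]])  # arbitrary 2x3 real matrix

left = M.T @ np.linalg.pinv(M @ M.T)    # A^*(AA^*)^+
right = np.linalg.pinv(M.T @ M) @ M.T   # (A^*A)^+A^*

# Both expressions give the same Moore-Penrose pseudoinverse
assert np.allclose(left, right)
assert np.allclose(left, np.linalg.pinv(M))
```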

  • Dear Professor Martin, is the formula $$A=2P_1+0\,(I-P_1)$$ true even in infinite dimensions? Thanks. (2018-12-19)
  • Not sure what you mean. What's $A$? (2018-12-19)
  • I mean when $A$ is a selfadjoint operator and not a matrix. (2018-12-19)
  • I mean, is there a practical formula to calculate the pseudoinverse of an operator on a Hilbert space? (2018-12-19)
  • No. In your question you had a rank-one selfadjoint operator. The argument tells you nothing about finding pseudoinverses in general. (2018-12-19)

One way is to find a full-rank decomposition of $A$, say $A=BC$, where $B$ is left invertible (rank equal to its number of columns) and $C$ is right invertible (rank equal to its number of rows). This can be accomplished in various ways: an LU, QR, or singular value decomposition.

For the LU decomposition, we write $A$ as the product $P^TLU$, where $P$ is a permutation matrix, $L$ is lower triangular and $U$ is in row echelon form. In this case $$ A=\begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix} $$ Then we get $C$ by removing the zero row from the row echelon form matrix and $B$ from the lower triangular matrix by removing the last column (in general, remove the last $k$ columns, where $k$ is the number of zero rows).

When $B$ is left invertible, it can be shown that its pseudoinverse is $B^+=(B^TB)^{-1}B^T$; if $C$ is right invertible, then $C^+=C^T(CC^T)^{-1}$, and also $A^+=C^+B^+$ (you can find the theory in several textbooks).

In this case $B=\begin{bmatrix}1\\1\end{bmatrix}$, so $B^TB=[2]$ and $$ B^+=\frac{1}{2}\begin{bmatrix} 1 & 1 \end{bmatrix} $$ Similarly, $C=\begin{bmatrix} 1 & 1 \end{bmatrix}$, so $CC^T=[2]$ and $$ C^+=\frac{1}{2}\begin{bmatrix}1\\1\end{bmatrix} $$ Thus $$ A^+=C^+B^+= \frac{1}{4}\begin{bmatrix}1\\1\end{bmatrix} \begin{bmatrix}1 & 1\end{bmatrix}= \frac{1}{4}\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix} $$
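The full-rank factorization route can be reproduced numerically (a sketch assuming NumPy, with $B$ and $C$ as in the text):

```python
import numpy as np

B = np.array([[1.0], [1.0]])   # left invertible: full column rank
C = np.array([[1.0, 1.0]])     # right invertible: full row rank
A = B @ C                      # the full-rank factorization A = BC

B_plus = np.linalg.inv(B.T @ B) @ B.T   # (B^T B)^{-1} B^T = [1/2, 1/2]
C_plus = C.T @ np.linalg.inv(C @ C.T)   # C^T (C C^T)^{-1} = [1/2; 1/2]
A_plus = C_plus @ B_plus                # A^+ = C^+ B^+

assert np.allclose(A_plus, 0.25 * np.ones((2, 2)))
assert np.allclose(A_plus, np.linalg.pinv(A))
```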

With the singular value decomposition $A=U\Sigma V^H$, where $U$ and $V$ are unitary, we have $$ A^+=V\Sigma^+U^H $$ where $\Sigma^+$ is obtained by transposing $\Sigma$ (a no-op when $A$ is square) and inverting the nonzero singular values.
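For completeness, the SVD route for this $A$ (a sketch assuming NumPy; the tolerance used to decide which singular values count as zero is my choice):

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 1.0]])
U, s, Vh = np.linalg.svd(A)   # singular values of A are 2 and 0

# Invert only the nonzero singular values (1e-12 is an assumed cutoff)
s_plus = np.array([1.0 / x if x > 1e-12 else 0.0 for x in s])
A_plus = Vh.T @ np.diag(s_plus) @ U.T   # A^+ = V Sigma^+ U^H

assert np.allclose(A_plus, np.linalg.pinv(A))
assert np.allclose(A_plus, 0.25 * np.ones((2, 2)))
```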