
I'd like to find the spectral decomposition of $A$:

$$A = \begin{pmatrix} 2-i & -1 & 0\\ -1 & 1-i & 1\\ 0 & 1 & 2-i \end{pmatrix}$$

i.e. $A=\sum_{i}\lambda_i P_i$, where the $P_i$ are the coordinate matrices (in the standard basis) of the orthogonal projections in the spectral decomposition of $T_A$, and the $\lambda_i$ are the eigenvalues.

I started off by showing that $A$ is normal, piece of cake.

Then I found the eigenvalues of $A$; they are $\lambda_1 = 2-i$, $\lambda_2 = 3-i$, $\lambda_3 = -i$. I tried using these known facts from the spectral theorem:

  • $A=(2-i)P_1+(3-i)P_2-iP_3$
  • $I=P_1+P_2+P_3$
  • $\forall i\neq j, P_i P_j=0$
  • $P^*_i=P_i$

The only example I have in my book uses these, but I couldn't get it to work here; the terms don't seem to cancel.

What else can I try?

  • The $P_i$ are the orthogonal projections onto the eigenspaces, so you need to find the eigenspace for each eigenvalue. Find a nonzero eigenvector for each eigenvalue; because $A$ is normal, the eigenvectors will be mutually orthogonal. Then you just need to normalize them; the $P_i$ are given by the orthogonal projections onto the subspaces spanned by these vectors. (2011-04-19)
  • Shouldn't $P_i$ be the *projection* onto the $i$-th eigenspace? Also, shouldn't they sum to the identity instead of to 0? (2011-04-19)
  • @Arturo: I'm stuck after finding the eigenspaces. I got $$V_{\lambda_1}=\operatorname{sp}\{(\tfrac{1}{\sqrt 2}, 0, \tfrac{1}{\sqrt 2})\},\ V_{\lambda_2}=\operatorname{sp}\{(\tfrac{-1}{\sqrt 3}, \tfrac{1}{\sqrt 3}, \tfrac{1}{\sqrt 3})\},\ V_{\lambda_3}=\operatorname{sp}\{(\tfrac{-1}{\sqrt 6}, \tfrac{-2}{\sqrt 6}, \tfrac{1}{\sqrt 6})\}$$ And I know that if $v=v_1+v_2+v_3$ with $v_i \in V_{\lambda_i}$, then $P_i v = v_i$. But how do I actually find $[P_i]$? I'm confused. (2011-04-20)
  • The $P_i$ are the orthogonal projections onto the eigenspaces. The eigenvectors you have already form an orthonormal basis, and the matrices of the $P_i$ relative to that basis are very easy: for example, if you order your basis as $\beta=[v_{\lambda_1},v_{\lambda_2},v_{\lambda_3}]$, then $$P_1 = \left(\begin{array}{ccc}1&0&0\\0&0&0\\0&0&0\end{array}\right).$$ To write $[P_i]_{\beta}$ in terms of the standard orthonormal basis of $\mathbb{C}^3$, just take the change of basis matrix $M$ and compute $M[P_i]_{\beta}M^{-1}$ (cont). (2011-04-20)
  • Here, $M$ changes from the $\beta$ basis to the standard basis, so the columns of $M$ are the vectors of $\beta$. (2011-04-20)
  • Just curious. Are you avoiding the usual eigendecomposition on purpose to exploit the normality structure? (2011-09-16)

3 Answers


Using the primary decomposition theorem (PDT): find the minimal polynomial of $A$. Clearly it is $m_A(x)=(x-\lambda_1)(x-\lambda_2)(x-\lambda_3)$. Define $f_i(x)=\frac{m_A(x)}{x-\lambda_i}$. Observe that $f_1,f_2,f_3$ are coprime (i.e. $\gcd(f_1,f_2,f_3)=1$). Hence you can find polynomials $g_1,g_2,g_3$ such that $g_1f_1+g_2f_2+g_3f_3=1$. Finally, define $P_i=g_i(A)f_i(A)$. Check why this works!
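
As a sanity check: since the eigenvalues are distinct, one may take each $g_i$ to be the constant $1/f_i(\lambda_i)$, so that $P_i=f_i(A)/f_i(\lambda_i)$. A hedged Python/NumPy sketch of this (names are mine, and the constant-$g_i$ shortcut is my addition to the answer's outline):

```python
import numpy as np

A = np.array([[2 - 1j, -1, 0],
              [-1, 1 - 1j, 1],
              [0, 1, 2 - 1j]])
lams = [2 - 1j, 3 - 1j, -1j]
I = np.eye(3)

# f_i(A) = prod_{j != i} (A - lam_j I), and f_i(lam_i) = prod_{j != i} (lam_i - lam_j).
P = []
for k, lk in enumerate(lams):
    fk_A = I.astype(complex)
    fk_lk = 1.0 + 0j
    for j, lj in enumerate(lams):
        if j != k:
            fk_A = fk_A @ (A - lj * I)
            fk_lk *= (lk - lj)
    P.append(fk_A / fk_lk)

# Verify A = sum_i lam_i P_i and P_1 + P_2 + P_3 = I.
recon = sum(l * Pi for l, Pi in zip(lams, P))
print(np.allclose(recon, A), np.allclose(sum(P), I))
```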


The following result is useful:

"A matrix is normal if and only if it is unitarily similar to a diagonal matrix; therefore any matrix $A$ satisfying $A^{*}A=AA^{*}$ is diagonalizable." That is,

$$ \mathbf{A} = \mathbf{U} \mathbf{\Lambda} \mathbf{U}^* . $$

The next step is to find the eigenvectors $v_i$, normalized to unit length. Once that is done, you can form the matrices $P_i$ as

$$ P_i = v_iv^{*}_i, $$

which gives,

$$ A=(2-i)P_1+(3-i)P_2-iP_3 = (2-i)v_1v^{*}_1+ (3-i)v_2v^{*}_2-i v_3v^{*}_3. $$

Note: You should check that $P_i,i=1,2,3$ satisfy the properties you listed above.

Eigenvectors: I computed the eigenvectors with Maple; they correspond to the eigenvalues you already computed, $2-i$, $3-i$, $-i$, respectively (note that they still need to be normalized):

$$v_1= \begin{pmatrix} 1\\0\\1 \end{pmatrix},\quad v_2= \begin{pmatrix} -1\\1\\1 \end{pmatrix},\quad v_3= \begin{pmatrix} -1\\-2\\1 \end{pmatrix}$$
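
Because the vectors above are not unit vectors, the rank-one formula needs the normalization $P_i = v_iv_i^{*}/(v_i^{*}v_i)$. A quick numerical check of the resulting decomposition (a sketch of my own, not from the answer):

```python
import numpy as np

A = np.array([[2 - 1j, -1, 0],
              [-1, 1 - 1j, 1],
              [0, 1, 2 - 1j]])
pairs = [(2 - 1j, np.array([1, 0, 1])),
         (3 - 1j, np.array([-1, 1, 1])),
         (-1j,    np.array([-1, -2, 1]))]

# P_i = v_i v_i^* / (v_i^* v_i); the division handles the missing normalization.
recon = sum(lam * np.outer(v, v.conj()) / (v.conj() @ v) for lam, v in pairs)
print(np.allclose(recon, A))
```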


It is also possible to compute the projection matrices without finding an eigenvector for each eigenvalue of the matrix $A$. The trick is to use the Cayley-Hamilton theorem: first find the partial fraction decomposition of $1/p_A(x)$, where $p_A(x)$ is the characteristic polynomial of $A$, factored into linear factors over $\mathbb{C}$.
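
A sketch of this partial-fraction route in SymPy. The residue formula $c_k = 1/p_A'(\lambda_k)$ and the final step $P_k = c_k\prod_{j\neq k}(A-\lambda_j I)$ are my additions to fill in the outline, and the variable names are mine:

```python
import sympy as sp

x = sp.symbols('x')
i = sp.I
A = sp.Matrix([[2 - i, -1, 0],
               [-1, 1 - i, 1],
               [0, 1, 2 - i]])

# Characteristic polynomial; over C it factors into distinct linear factors.
p = A.charpoly(x).as_expr()
lams = [2 - i, 3 - i, -i]

# Partial fractions: 1/p(x) = sum_k c_k/(x - lam_k) with c_k = 1/p'(lam_k).
# Multiply through by p(x); substituting A and using Cayley-Hamilton
# (p(A) = 0) leaves the spectral projections P_k = c_k * prod_{j!=k}(A - lam_j I).
dp = sp.diff(p, x)
Ps = []
for lk in lams:
    ck = 1 / dp.subs(x, lk)
    Q = sp.eye(3)
    for lj in lams:
        if lj != lk:
            Q = Q * (A - lj * sp.eye(3))
    Ps.append(sp.simplify(ck * Q))

recon = sum((l * P for l, P in zip(lams, Ps)), sp.zeros(3))
print(sp.simplify(recon - A) == sp.zeros(3))
```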