
Let $M$ be a symmetric positive-definite real matrix, $v$ a nonzero vector, and $P$ the orthogonal projection onto the orthogonal complement of $v$. First, I am wondering whether $PM$ is diagonalizable. Then, is $PM$ the square of a diagonalizable matrix?

$M$ is symmetric and real, hence $M=QD^2 Q^{-1}$ for some orthogonal matrix $Q$ and diagonal matrix $D$. The hyperplane $H = v^\perp$ is stable under $PM$, and we can choose an orthonormal basis $(e_1,\dots,e_n)$ where $H$ is spanned by $(e_1,\dots,e_{n-1})$ and $e_n$ is collinear with $v$. But I can't find much more to say...

1 Answer


I'll use primes to denote "transpose" to make typing easier.

Assume $v$ is a unit vector. Pick $w_1, \ldots, w_{n-1}$ so that $\{w_1, w_2, \ldots, w_{n-1}, v\}$ is an orthonormal basis, and let $W$ be the matrix containing these vectors as its columns. Then $$ (W' M W)' = W' M' W'' = W' M W $$ so $W' M W$ is symmetric. Also, $$W'PW = W' (I - v v') W = W'IW - W'v v'W = I - (W'v)(W'v)' $$ so $W' P W$ is again a projection of the same form, associated with the unit vector $W'v$.

Now since $W' v = e_n$, this means that $W'PW$ is the $n \times n$ identity, except that the bottom right entry is a zero.

Conclusion of this initial discussion: the problem for $P$ and $M$ can be converted, by a change of basis, to a similar problem in which $v$ is just $e_n$ and $M$ is some other symmetric matrix.
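This change of basis is easy to sanity-check numerically. Here's a small NumPy sketch (the dimension, random seed, and way of building $W$ are my own choices, not part of the answer): it builds a random symmetric positive-definite $M$, a random unit $v$, an orthonormal basis with $v$ as the last column, and checks that $W'PW$ is the identity with the bottom-right entry zeroed.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Random symmetric positive-definite M and random unit vector v.
A = rng.standard_normal((n, n))
M = A @ A.T + n * np.eye(n)
v = rng.standard_normal(n)
v /= np.linalg.norm(v)

# QR on a matrix whose first column is v gives Q[:,0] = ±v;
# move it to the last position to get W = [w_1, ..., w_{n-1}, ±v].
Q, _ = np.linalg.qr(np.column_stack([v, rng.standard_normal((n, n - 1))]))
W = np.column_stack([Q[:, 1:], Q[:, 0]])

P = np.eye(n) - np.outer(v, v)           # projection onto v-perp
WPW = W.T @ P @ W

# W'PW should be the identity with the bottom-right entry zeroed
# (the sign ambiguity in ±v drops out of (W'v)(W'v)').
target = np.eye(n)
target[-1, -1] = 0.0
print(np.allclose(WPW, target))          # → True
```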

So: we'll assume $v = e_n$ from here on, so that $P$ is simply the identity matrix with the bottom-right entry set to $0$.


Now let's write the matrix $M$ in the form $$ \begin{bmatrix} & & & & | \\ & & N & & a \\ & & & & | \\ & - & a'& - & c \end{bmatrix} $$ where this is meant to indicate that the matrix $N$ is $(n-1) \times (n-1)$, the right-hand column of $M$ consists of the $(n-1)$-vector $a$ atop the scalar $c$, and the bottom row is just the transpose of this.

Since $N$ is positive-definite symmetric, it's diagonalizable, with orthonormal eigenvectors $u_1, \ldots, u_{n-1} \in \Bbb R^{n-1}$ and positive eigenvalues $\lambda_1, \ldots, \lambda_{n-1}$. Letting $$ i : \Bbb R^{n-1} \to \Bbb R^n : \begin{bmatrix}x_1\\ \vdots \\x_{n-1} \end{bmatrix} \mapsto \begin{bmatrix}x_1\\ \vdots \\x_{n-1} \\0\end{bmatrix} $$ we get vectors $t_j = i(u_j)$ for $j = 1, \ldots, n-1$.

What is $Mt_j$? It's just $Nu_j$ atop $a \cdot u_j$, i.e., it's $\lambda_j u_j$ extended by something as a last entry. Hold that thought.

Now let $U$ be the matrix whose columns are $t_1, \ldots, t_{n-1}, e_n$. $U$ is evidently orthogonal. Let's compute $$ U' PM U $$ by starting from $MU$ (whose $j$th column is $Nu_j$ atop $a \cdot u_j$, except for the $n$th column, which is $Me_n$, i.e., $a$ atop $c$). Multiplying by $P$ sets the last entry of each column to $0$, so we end up with a matrix $$ PMU = \begin{bmatrix} | & & | & | \\ \lambda_1 u_1 & \ldots & \lambda_{n-1} u_{n-1} & a \\ | & & | & | \\ 0 & \ldots & 0 & 0 \end{bmatrix} $$ Multiplying on the left by $U'$ gives a diagonal upper-left block, $0$s along the bottom, and some dot products in the right-hand column, hence the general form $$ U'PMU = \begin{bmatrix} \lambda_1 & & & u_1 \cdot a \\ & \ddots & & \vdots \\ & & \lambda_{n-1} & u_{n-1} \cdot a \\ 0 & \ldots & 0 & 0 \end{bmatrix} $$
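The computation above can be checked numerically. This NumPy sketch (dimension and seed are my own choices) builds a random symmetric positive-definite $M$ with $v = e_n$, assembles $U$ from the eigenvectors of $N$, and verifies that $U'PMU$ has exactly the claimed arrow shape:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

A = rng.standard_normal((n, n))
M = A @ A.T + n * np.eye(n)              # symmetric positive-definite
N = M[:-1, :-1]                          # upper-left (n-1) x (n-1) block
a = M[:-1, -1]                           # the (n-1)-vector above the corner c

lam, Uu = np.linalg.eigh(N)              # N = Uu diag(lam) Uu'
U = np.eye(n)
U[:-1, :-1] = Uu                         # columns t_1, ..., t_{n-1}, e_n

P = np.eye(n)
P[-1, -1] = 0.0                          # projection onto e_n-perp
T = U.T @ P @ M @ U

print(np.allclose(T[-1, :], 0))                 # bottom row of zeros → True
print(np.allclose(T[:-1, :-1], np.diag(lam)))   # diagonal block λ_j → True
print(np.allclose(T[:-1, -1], Uu.T @ a))        # last column u_j · a → True
```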

The characteristic polynomial of that matrix is $$ c(t) = (\lambda_1 - t)\cdots (\lambda_{n-1} - t)(-t) $$ with eigenvectors $e_1, \dots, e_{n-1}$ for $\lambda_1, \dots, \lambda_{n-1}$, plus an eigenvector $q$ for $0$ that isn't easy to write down off the top of my head. I think it's clear that $q$ has a nonzero $n$th component, though.

Whew! Now we know that in the $U$ basis, $PM$ has a full complement of eigenvectors, and hence $PM$ is diagonalizable (in the sense that we can form the matrix $E$ whose columns are the eigenvectors and write $$ E^{-1} PM E $$ to get a diagonal matrix).

On the other hand, unless all the $a \cdot u_j$ entries are zero, the matrix $PM$ is not orthogonally diagonalizable, i.e., there's no orthogonal matrix $F$ with $$ F' PM F $$ diagonal: if there were, its columns would have to be eigenvectors of $PM$, which would then have to be orthogonal, which would require that all the $a \cdot u_j$ entries be zeroes.

Note that since the $\lambda_j$ are all positive and the remaining eigenvalue is $0$, this matrix actually has a square root as well: writing $PM = EDE^{-1}$ with $D$ diagonal and nonnegative, the matrix $ED^{1/2}E^{-1}$ is diagonalizable and squares to $PM$.
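Both conclusions (diagonalizability and the existence of a square root) can be checked on a random instance. A NumPy sketch, with my own choice of size and seed; the `clip` guards against the zero eigenvalue being computed as a tiny negative number:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5

A = rng.standard_normal((n, n))
M = A @ A.T + n * np.eye(n)              # symmetric positive-definite
P = np.eye(n)
P[-1, -1] = 0.0                          # v = e_n

w, E = np.linalg.eig(P @ M)              # eigenvalues λ_1..λ_{n-1} > 0 and 0
w, E = w.real, E.real                    # all eigenvalues are real

# Diagonalizable: E is invertible and E^{-1}(PM)E is diagonal.
D = np.linalg.inv(E) @ (P @ M) @ E
print(np.allclose(D, np.diag(np.diag(D))))   # → True

# Square root: S = E diag(sqrt(w)) E^{-1} is diagonalizable and S² = PM.
S = E @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ np.linalg.inv(E)
print(np.allclose(S @ S, P @ M))             # → True
```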

  • "_where $q$ is the eigenvector for $0$ and isn't easy to write down off the top of my head_" Isn't it clear that $q$ is collinear to $e_n$ (that's the purpose of the change of basis, I guess)? (2017-02-06)
  • I think the proof could be shortened a bit, starting from the fact that $N$ is diagonalizable. $N$ is invertible, so the direct sum of the kernel of $PM$ and the hyperplane of vectors with zero last coordinate is the whole space (the matrix of the endomorphism induced by $PM$ on this hyperplane is $N$), hence $PM$ is diagonalizable. (2017-02-06)
  • It probably *can* be shortened; you get what you pay for. :) Also: I didn't know how much you knew (like "does this person know what a direct sum is?"), so I kept it very concrete. (2017-02-06)
  • I got more than what I paid for :) Thank you for your answer. (2017-02-06)
  • As for $q$: try multiplying $U'PMU$ by $e_n$: you get the last column, which is NOT a multiple of $e_n$. Before the change of basis using $U$, that was true... afterwards... not so much. (2017-02-06)
  • OK, so $U'e_n$, which is nonzero, should work, I guess. At least it's sure that there is a nonzero $q$, namely the image of $e_n$ under the change of basis :) (2017-02-06)
  • That sure *sounds* right, but I can never get my transposes worked out without a concrete numerical example. Sigh. (2017-02-06)
  • By the way, some credit to @egreg, who steered me away from my earlier and completely wrong solution. :) (2017-02-06)