I tried to construct a proof of the above statement using matrices. It goes like this: let $a$ be an arbitrary element of a $C^*$-algebra. Consider the matrix $\begin{pmatrix}0 & a\\a^* & 0 \end{pmatrix}$. This is a self-adjoint operator, hence its spectrum lies in $\mathbb{R}$. The square of this matrix is $\begin{pmatrix} aa^* & 0\\0 & a^*a\end{pmatrix}$. Being the square of a self-adjoint operator, it has spectrum contained in $[0,\infty)$. Hence the spectrum of $aa^*$ is also contained in $[0,\infty)$. I doubt whether this proof is correct, but I could not point out a mistake. Please help.
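(Not part of the original question.) The argument above can be sanity-checked numerically in the concrete case $A = M_2(\mathbb{C})$; the sample matrix `a` below is my own arbitrary choice, not anything from the post:

```python
import numpy as np

# Arbitrary element a of the concrete C*-algebra M_2(C) (sample values only).
a = np.array([[1.0, 2.0],
              [3.0, 4.0 + 1.0j]])
a_star = a.conj().T
zero = np.zeros_like(a)

# The block matrix H = [[0, a], [a*, 0]] from the question.
H = np.block([[zero, a], [a_star, zero]])

# H is self-adjoint, so its spectrum is real.
assert np.allclose(H, H.conj().T)

# H^2 is block-diagonal with blocks aa* and a*a, as claimed.
H2 = H @ H
assert np.allclose(H2, np.block([[a @ a_star, zero], [zero, a_star @ a]]))

# The spectrum of H^2 (the square of a self-adjoint element) lies in [0, inf).
spectrum = np.linalg.eigvalsh(H2)
print(np.all(spectrum >= -1e-10))  # True
```

Of course, a finite-dimensional check is only an illustration; it does not settle the question of whether the spectral reasoning transfers to an abstract $C^*$-algebra, which is exactly what the answer below addresses.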
Proving $aa^*\geq0$ in a $C^*$ algebra
-
Your proof seems to indicate that you have the concrete definition of a $C^*$-algebra in mind, i.e., as bounded operators on a Hilbert space. Is that so? In the abstract version of the theory, elements of the form $aa^*$ are usually taken as positive by definition, and the challenge is to prove that such elements form a cone. – 2012-04-13
-
@HaraldHanche-Olsen: Actually, not necessarily: Blackadar's *Operator Algebras* takes $x = x^*$ and $\sigma(x) \subseteq [0,\infty)$ as the definition of positivity. – 2012-04-13
-
@MartinWanvik: Well, all the more reason for rsg to tell us where he is coming from. – 2012-04-13
-
@ Harald Hanche-Olsen I have started learning $C^*$-algebras from a mathematical point of view. I am basically a physics student who started working in quantum mechanics, so I am not sure whether I am missing something; however, I agree that, in the back of my mind, I represented it in $\mathcal{B(H)}$ for some Hilbert space $\mathcal{H}$. – 2012-04-14
-
@ Martin Wanvik In the case of matrices, say, we can prove that a matrix $A$ is positive (semi-definite) iff it can be written as $A = BB^*$ for some $B$. I was trying to do the same in a $C^*$-algebra. – 2012-04-14
-
@rsg: First of all, if you want other people to be notified about your response(s), you need to drop the space after the @ (also, you need to delete the spaces inside the user names - see http://meta.stackexchange.com/questions/43019/how-do-comment-replies-work). And yes, I know what you're trying to do. – 2012-04-14
1 Answer
Writing $$ M := \begin{bmatrix} aa^* & 0 \\ 0 & a^*a \end{bmatrix} \in M_2(A), $$ the only thing missing (as far as I can tell; hopefully someone will set me straight if I've missed something) is a proof that $\sigma(aa^*) \subseteq \sigma(M)$, and that shouldn't be too difficult. Given $\lambda \in \mathbb{C}$, it is easy to see that if $$ \begin{bmatrix} aa^* - \lambda 1 & 0 \\ 0 & a^*a - \lambda 1 \end{bmatrix} \begin{bmatrix} b & c \\ d & e \end{bmatrix} = \begin{bmatrix} b & c \\ d & e \end{bmatrix} \begin{bmatrix} aa^* - \lambda 1 & 0 \\ 0 & a^*a - \lambda 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} $$ for $b,c,d,e \in A$, then comparing the $(1,1)$-entries of both products gives $(aa^* - \lambda 1)b = b(aa^* - \lambda 1) = 1$. So if $aa^* - \lambda 1$ is not invertible in $\widetilde{A}$ (the unitization of $A$), then $M - \lambda I$ is not invertible in $M_2(\widetilde{A})$, and hence $\sigma(aa^*) \subseteq \sigma(M)$. This proves the result.
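As a quick illustration of the inclusion $\sigma(aa^*) \subseteq \sigma(M)$, here is a numerical check in the concrete case $A = M_2(\mathbb{C})$ (the sample matrix `a` is my own choice, added for illustration):

```python
import numpy as np

# Sample element a of M_2(C) (values chosen arbitrarily for illustration).
a = np.array([[0.0, 1.0],
              [2.0, 3.0 - 1.0j]])
a_star = a.conj().T
zero = np.zeros_like(a)

# M = diag(aa*, a*a), as defined in the answer.
M = np.block([[a @ a_star, zero], [zero, a_star @ a]])

spec_aa = np.linalg.eigvalsh(a @ a_star)
spec_M = np.linalg.eigvalsh(M)

# Every eigenvalue of aa* reappears as an eigenvalue of M,
# i.e. sigma(aa*) is contained in sigma(M).
for lam in spec_aa:
    assert np.min(np.abs(spec_M - lam)) < 1e-9
print("inclusion verified")
```

For matrices the spectrum of $M$ is exactly $\sigma(aa^*) \cup \sigma(a^*a)$, so the check above is guaranteed to pass; the point of the answer is that the inclusion also holds in an abstract $C^*$-algebra.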