
It is a well-known fact that if $A,B\in M_{n\times n}(\mathbb C)$ and $AB=BA$, then $e^Ae^B=e^Be^A.$
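A quick numerical illustration of this direction (a toy commuting pair of my own, built as polynomials in a single matrix, using SciPy's `expm`):

```python
import numpy as np
from scipy.linalg import expm

# A and B commute by construction: both are polynomials in the same matrix C.
C = np.array([[1.0, 2.0], [3.0, 4.0]])
A, B = C, C @ C
assert np.allclose(A @ B, B @ A)
print(np.allclose(expm(A) @ expm(B), expm(B) @ expm(A)))  # True
```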

The converse does not hold. Horn and Johnson give the following example in their Topics in Matrix Analysis (page 435). Let $A=\begin{pmatrix}0&0\\0&2\pi i\end{pmatrix},\qquad B=\begin{pmatrix}0&1\\0&2\pi i\end{pmatrix}.$ Then $AB=\begin{pmatrix}0&0\\0&-4\pi^2\end{pmatrix}\neq\begin{pmatrix}0&2\pi i\\0&-4\pi^2\end{pmatrix}=BA.$ We have $e^A=\sum_{k=0}^{\infty}\frac 1{k!}\begin{pmatrix}0&0\\0&2\pi i\end{pmatrix}^k=\sum_{k=0}^{\infty}\frac 1{k!}\begin{pmatrix}0^k&0\\0&(2\pi i)^k\end{pmatrix}=\begin{pmatrix}e^0&0\\0&e^{2\pi i}\end{pmatrix}=\begin{pmatrix}1&0\\0&1\end{pmatrix}.$

For $S=\begin{pmatrix}1&-\frac i{2\pi}\\0&1 \end{pmatrix},$ we have $e^B=e^{SAS^{-1}}=Se^AS^{-1}=S\begin{pmatrix}1&0\\0&1\end{pmatrix}S^{-1}=\begin{pmatrix}1&0\\0&1\end{pmatrix}.$

Therefore $A$ and $B$ are non-commuting matrices with $e^Ae^B=\begin{pmatrix}1&0\\0&1\end{pmatrix}=e^Be^A.$
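As a sanity check of the whole example, here is a short numerical sketch (not part of Horn and Johnson's argument):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 0.0], [0.0, 2j * np.pi]])
B = np.array([[0.0, 1.0], [0.0, 2j * np.pi]])
S = np.array([[1.0, -1j / (2 * np.pi)], [0.0, 1.0]])

print(np.allclose(A @ B, B @ A))                 # False: A and B do not commute
print(np.allclose(S @ A @ np.linalg.inv(S), B))  # True:  B = S A S^{-1}
print(np.allclose(expm(A), np.eye(2)))           # True:  e^A = I
print(np.allclose(expm(B), np.eye(2)))           # True:  e^B = I
print(np.allclose(expm(A) @ expm(B), expm(B) @ expm(A)))  # True: the exponentials commute
```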

It is clear that $\pi$ plays an essential role in this particular example. In fact, the authors add the following remark.

It is known that if all entries of $A,B\in M_n$ are algebraic numbers and $n\geq 2,$ then $e^A\cdot e^B=e^B\cdot e^A$ if and only if $AB=BA.$

No proof is given. How does one go about proving that?

  • I managed to get to the article by a quite roundabout method. It does contain the proof; it's not very long, but it uses theorems I'm not very familiar with (I'm no analyst). I will try to understand it and write it down here. (2012-07-01)

2 Answers


The proof comes from Wermuth's "Two remarks on matrix exponentials" (DOI 10.1016/0024-3795(89)90554-5).

It seems to me that it is enough to assume that no two distinct eigenvalues of $A$ (and likewise of $B$) differ by an integer multiple of $2\pi i$; this holds if, but not only if, $\pi$ is transcendental over the field generated by their entries. In particular it is true when $A$ and $B$ have algebraic entries: the eigenvalues are then algebraic, while any nonzero integer multiple of $2\pi i$ is transcendental. (Note that the Horn and Johnson example above violates this condition: the eigenvalues $0$ and $2\pi i$ differ by exactly $2\pi i$.)

The basic idea is to invert the exponential function, expressing $A$ and $B$ as power series (in fact, polynomials) in their respective exponentials.

Let $m(\lambda)=\prod_j (\lambda-\lambda_j)^{\mu_j}$ be the minimal polynomial of $A$. By assumption the values $e^{\lambda_j}$ are all distinct, so by Hermite's interpolation theorem we can find a polynomial $f$ such that $g=f\circ \exp$ satisfies $g(\lambda_j)=\lambda_j$ and, whenever $\mu_j>1$, also $g'(\lambda_j)=1$ and $g^{(l)}(\lambda_j)=0$ for $2\leq l<\mu_j$. (The conditions on $g$ at $\lambda_j$ translate via the chain rule into conditions on $f$ and its derivatives at the distinct nodes $e^{\lambda_j}$, which is exactly the kind of data Hermite interpolation matches.)

Then we have $g(A)=A$: this is standard matrix-function theory (a corollary of the Jordan normal form), by which $g(A)$ depends only on the values $g^{(l)}(\lambda_j)$ for $0\leq l<\mu_j$, and $g$ agrees with the identity function to that order at every $\lambda_j$. But $g(A)=f(e^A)$, so $A=f(e^A)$, and similarly $B=h(e^B)$ for some polynomial $h$. Since $e^A$ and $e^B$ commute, so do any polynomials in them, and therefore $AB=f(e^A)h(e^B)=h(e^B)f(e^A)=BA$.
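To see the mechanism in the simplest case, here is a small numerical sketch (a toy matrix of my own with distinct eigenvalues, so the derivative conditions are vacuous and plain Lagrange interpolation through the nodes $e^{\lambda_j}$ suffices):

```python
import numpy as np
from scipy.linalg import expm

# Toy example: A has distinct eigenvalues 1 and 3, so e^1 and e^3 are
# distinct nodes and Lagrange interpolation gives f with f(e^{lam_j}) = lam_j.
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])
lam = np.linalg.eigvals(A)
nodes = np.exp(lam)

def f(X):
    """Evaluate the Lagrange interpolant with f(nodes[j]) = lam[j] at a matrix X."""
    I = np.eye(X.shape[0])
    total = np.zeros_like(X)
    for j in range(len(nodes)):
        term = lam[j] * I
        for m in range(len(nodes)):
            if m != j:
                term = term @ (X - nodes[m] * I) / (nodes[j] - nodes[m])
        total = total + term
    return total

print(np.allclose(f(expm(A)), A))   # True: A = f(e^A)
```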


A broad-brush approach is as follows. We want to prove that if $e^A$ and $A$ have the same number of distinct eigenvalues, then $A$ is a polynomial in $e^A$. This is true for diagonal matrices (interpolate: choose $p$ with $p(e^{\lambda_i})=\lambda_i$, so that $p(e^A)=A$) and hence for diagonalizable matrices. Since the diagonalizable matrices are dense, it is true for all matrices.

We can avoid the density argument, though. Any square matrix $A$ can be written in exactly one way as $A=D+N$ where $D$ is diagonalizable, $N$ is nilpotent, $DN=ND$, and both $D$ and $N$ are polynomials in $A$ (the Jordan-Chevalley decomposition). If $N^k=0$, the series expansion of the exponential gives $ e^A = e^De^N = e^D\left(I+N+\frac{N^2}{2!}+\cdots+\frac{N^{k-1}}{(k-1)!}\right) = e^D + M $ with $M=e^D\left(N+\frac{N^2}{2!}+\cdots+\frac{N^{k-1}}{(k-1)!}\right)$. Note that $M^k=0$. As $e^D$ and $M$ are polynomials in $D$ and $N$, they are polynomials in $A$; they are also the diagonalizable and nilpotent parts in the decomposition of $e^A$.
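A numerical illustration of this decomposition on a single $2\times 2$ Jordan block (a toy example, using `scipy.linalg.expm`):

```python
import numpy as np
from scipy.linalg import expm

# A = D + N with D = 2I diagonalizable, N strictly upper triangular, DN = ND.
D = np.array([[2.0, 0.0], [0.0, 2.0]])
N = np.array([[0.0, 1.0], [0.0, 0.0]])   # N @ N == 0

eD = expm(D)
M = eD @ N          # here N^2 = 0, so e^N = I + N and M = e^D N
print(np.allclose(expm(D + N), eD + M))  # True: e^A = e^D + M
print(np.allclose(M @ M, 0))             # True: M is nilpotent
```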

Now suppose $e^A$ and $e^B$ commute. Using the above we have $ e^A = e^{D_A}+M_A,\quad e^B = e^{D_B}+M_B, $ where $A=D_A+N_A$ and $B=D_B+N_B$ are the decompositions of $A$ and $B$. Any two of the six matrices here commute (because the first three are polynomials in $e^A$ and the second three are polynomials in $e^B$), and therefore $e^{D_A}$ and $e^{D_B}$ commute. Moreover $e^A=e^{D_A}(I-\tilde N_A)^{-1}$, where $\tilde N_A=I-e^{-N_A}$ is nilpotent, and similarly for $B$; hence $(I-\tilde N_A)^{-1}=e^{-D_A}e^A$ and $(I-\tilde N_B)^{-1}=e^{-D_B}e^B$ commute. This implies that $N_A$ and $N_B$ commute, since each $N$ is recovered from $(I-\tilde N)^{-1}=e^{N}$ by the terminating logarithm series $\log(I+X)=X-\frac{X^2}{2}+\cdots$ and is therefore a polynomial in it. So if $D_A$ and $D_B$ are polynomials in $e^{D_A}$ and $e^{D_B}$ respectively, then all four of $D_A,N_A,D_B,N_B$ lie in a single commutative algebra and $AB=BA$. As $D_A$ and $D_B$ are diagonalizable, we are OK provided the number of distinct eigenvalues of $e^{D_A}$ equals the number of distinct eigenvalues of $D_A$ (and likewise for $e^{D_B}$ and $D_B$); this is precisely the condition that no two distinct eigenvalues of $A$ (respectively $B$) differ by a nonzero integer multiple of $2\pi i$, which holds in particular for algebraic entries.
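The step recovering $N$ from $e^{N}$ uses only a terminating logarithm series; here is a small sketch on a toy nilpotent matrix:

```python
import numpy as np
from scipy.linalg import expm

# For nilpotent N (here N^3 = 0), X = e^N - I is nilpotent too, so the series
# log(I + X) = X - X^2/2 + X^3/3 - ... terminates and is a polynomial in e^N.
N = np.array([[0.0, 1.0, 2.0],
              [0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0]])        # N^3 = 0

X = expm(N) - np.eye(3)                 # nilpotent, X^3 = 0
log_expN = X - X @ X / 2                # higher terms vanish since X^3 = 0
print(np.allclose(log_expN, N))         # True: N is a polynomial in e^N
```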