6

I think I have heard that the following is true before, but I don't know how to prove it:

Let $A$ be a matrix with real entries. Then the minimal polynomial of $A$ over $\mathbb{C}$ is the same as the minimal polynomial of $A$ over $\mathbb{R}$.

Is this true? Would anyone be willing to provide a proof?


Attempt at a proof:

Let $M(t)$ be the minimal polynomial of $A$ over the reals, and $P(t)$ the minimal polynomial over the complex numbers. We can regard $M$ as a polynomial over $\Bbb C$; it still satisfies $M(A)=0$, and therefore $P(t)$ divides it. In addition, we can write $P(t)$ as a sum of two real polynomials: $P(t)=R(t)+iK(t)$. Plugging in $A$ we get $R(A)+iK(A)=P(A)=0$; since $R(A)$ and $K(A)$ are real matrices, this forces both $R(A)=0$ and $K(A)=0$. Viewing $K$ and $R$ as real polynomials, we get that $M(t)$ divides them both, and therefore divides $R+iK=P$.

Now $M$ and $P$ are monic polynomials, and they divide each other, therefore $M=P$.

Does this look to be correct?


More generally, one might prove the following

Let $A$ be any square matrix with entries in a field $K$, and let $F$ be an extension field of $K$. Then the minimal polynomial of $A$ over $F$ is the same as the minimal polynomial of $A$ over $K$.
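For a concrete example: take $A=\begin{pmatrix}0&-1\\1&0\end{pmatrix}$. Over $\mathbb{R}$ the minimal polynomial is $t^2+1$, since $A^2+I=0$ and $A$ is not a real scalar multiple of $I$. Over $\mathbb{C}$ this factors as $(t-i)(t+i)$, but neither $A-iI$ nor $A+iI$ is the zero matrix, so the minimal polynomial over $\mathbb{C}$ is still $t^2+1$: the factorization becomes finer, yet the minimal polynomial itself does not change.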

  • 0
Arturo: My apologies. I posted the question, then realized I have a proof (hence "Think before you leap"). Thank you as always - next time I have a question, I'll register. (2011-09-21)

4 Answers

0

This looks correct.

Another way to see it is that you can find the minimal polynomial of the matrix by computing the invariant factors of the matrix $A-X\operatorname{Id}$ over $\mathbb{R}$. Since exactly the same row and column operations can be carried out over $\mathbb{C}$, the invariant factors, and in particular the minimal polynomial, are the same over both fields.

Sorry, I am not sure of the English term "invariant factors"; I mean the process in which, using only row and column operations over the polynomial ring, the matrix $A-X\operatorname{Id}$ is brought to a diagonal matrix whose diagonal entries are polynomials, each dividing the next (this diagonal form is unique up to nonzero constants). The last of these invariant factors is the minimal polynomial of $A$, and their product is, up to sign, the characteristic polynomial of $A$.
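As a small computational sketch of this (assuming SymPy is available; the diagonal matrix below is just a convenient example I chose), one can obtain the invariant factors from the determinantal divisors: $d_k$ is the monic gcd of all $k\times k$ minors of $X\operatorname{Id}-A$ (the sign convention does not matter for the monic gcds), and the $k$-th invariant factor is $d_k/d_{k-1}$. Every quantity involved lies in $K[X]$, which is another way to see that nothing changes when the field is enlarged:

```python
# Sketch: invariant factors of x*Id - A via determinantal divisors, using SymPy.
# The matrix A is an arbitrary example; d_k is the monic gcd of all k x k minors,
# and the invariant factors are the quotients d_k / d_{k-1}.
from functools import reduce
from itertools import combinations
import sympy as sp

x = sp.symbols('x')
A = sp.Matrix([[2, 0, 0],
               [0, 2, 0],
               [0, 0, 3]])          # example matrix; minimal polynomial (x-2)(x-3)
n = A.shape[0]
M = x * sp.eye(n) - A

def det_divisor(k):
    """Monic gcd of all k x k minors of M (all computed in Q[x])."""
    minors = [M[list(r), list(c)].det()
              for r in combinations(range(n), k)
              for c in combinations(range(n), k)]
    g = reduce(sp.gcd, minors)
    return sp.Poly(g, x).monic().as_expr()

d = [sp.Integer(1)] + [det_divisor(k) for k in range(1, n + 1)]
invariant_factors = [sp.cancel(d[k] / d[k - 1]) for k in range(1, n + 1)]
print(invariant_factors)                   # [1, x - 2, x**2 - 5*x + 6]
print(sp.factor(invariant_factors[-1]))    # (x - 2)*(x - 3), the minimal polynomial
```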

  • 0
Don't apologise, I'm having trouble with English as well! Since Arturo posted what seems like a more straightforward proof (well, it's the one I thought of...), I've accepted his answer, but thank you for your input and I will consider your idea. (2011-09-21)
9

Another way of proving this fact is to observe that "you never leave the base field while doing Gaussian elimination". More precisely:

Proposition. Let $K \subseteq F$ be a field extension and let $v_1, \dots, v_r \in K^n$. If $v_1, \dots, v_r$ are linearly dependent over $F$, then they are linearly dependent over $K$.

Proof. We prove the contrapositive. Suppose that the $v_i$'s are linearly independent over $K$, and let $\lambda_i \in F$ be such that $\sum_i \lambda_i v_i = 0$. We can find $e_j \in F$, linearly independent over $K$, such that $\lambda_i = \sum_j \alpha_{ij} e_j$ with $\alpha_{ij} \in K$ (take a $K$-basis of the $K$-subspace of $F$ spanned by the $\lambda_i$). Now from $\sum_{i,j} e_j \alpha_{ij} v_i = 0$, looking at each coordinate and using the independence of the $e_j$ over $K$, we deduce that $\sum_i \alpha_{ij} v_i = 0$ for every $j$. From the independence of the $v_i$'s over $K$, we have $\alpha_{ij} = 0$, so $\lambda_i = 0$. $\square$

Now consider a field extension $K \subseteq F$ and a matrix $A \in M_n(K)$. Let $\mu_K$ and $\mu_F$ be the minimal polynomials of $A$ over $K$ and $F$, respectively. Consider $I, A, A^2, \dots, A^r$ with $r = \deg \mu_F$ as vectors in the vector space $M_n(K)$: they are linearly dependent over $F$ (because $\mu_F(A) = 0$), hence linearly dependent over $K$ by the proposition, so $\deg \mu_K \leq \deg \mu_F$. On the other hand it is clear that $\mu_F$ divides $\mu_K$, so $\deg \mu_F \leq \deg \mu_K$. The degrees are therefore equal, and since both polynomials are monic and $\mu_F$ divides $\mu_K$, we get $\mu_F = \mu_K$.
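As a computational sketch of this argument (assuming SymPy and NumPy are available; the matrix $A$ below is just an example I chose), one can find $\deg \mu_K$ as the first $r$ for which $I, A, \dots, A^r$ become linearly dependent, and check that the rank computation behind it gives the same answer whether it is carried out exactly over the rationals or numerically over the complex numbers:

```python
# Sketch: degree of the minimal polynomial as the first linear dependence among
# I, A, A^2, ...; the rank of this stack of real vectors is the same over R/Q and C.
import numpy as np
import sympy as sp

A = sp.Matrix([[0, -1, 0],
               [1,  0, 0],
               [0,  0, 0]])          # example matrix; expected minimal polynomial x^3 + x

n = A.shape[0]
powers = [sp.eye(n)]                 # I, A, A^2, ... as they are generated
while True:
    rows = sp.Matrix([list(P) for P in powers])   # vectorised powers, one per row
    if rows.rank() < len(powers):    # first linear dependence found (exact, over Q)
        break
    powers.append(powers[-1] * A)

deg = len(powers) - 1

# The same stack of real vectors has the same rank when viewed over C,
# so no complex combination of lower degree can annihilate A either.
rows_c = np.array([[float(e) for e in P] for P in powers], dtype=complex)
assert np.linalg.matrix_rank(rows_c) == rows.rank() == deg

# Recover the monic dependence relation: its coefficients are those of mu_K.
c = rows.T.nullspace()[0]
c = c / c[deg]                       # normalise so the leading coefficient is 1
x = sp.symbols('x')
mu = sum(c[i] * x**i for i in range(deg + 1))
print(sp.expand(mu))                 # x**3 + x
assert sum((c[i] * powers[i] for i in range(deg + 1)), sp.zeros(n)) == sp.zeros(n)
```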

7

Written before/while the OP was adding his/her own proof, which is essentially the same as what follows.

Let $\mu_{\mathbb{R}}(x)$ be the minimal polynomial of $A$ over $\mathbb{R}$, and let $\mu_{\mathbb{C}}(x)$ be the minimal polynomial of $A$ over $\mathbb{C}$.

Since $\mu_{\mathbb{R}}(x)\in\mathbb{C}[x]$ and $\mu_{\mathbb{R}}(A) = \mathbf{0}$, it follows from the definition of the minimal polynomial that $\mu_{\mathbb{C}}(x)$ divides $\mu_{\mathbb{R}}(x)$.

I claim that $\mu_{\mathbb{C}}(x)$ has real coefficients. Indeed, write $\mu_{\mathbb{C}}(x) = x^m + (a_{m-1}+ib_{m-1})x^{m-1}+\cdots + (a_0+ib_0),$ with $a_j,b_j\in\mathbb{R}$. Since $A$ is a real matrix, all entries of $A^j$ are real, so $\mu_{\mathbb{C}}(A) = (A^m + a_{m-1}A^{m-1}+\cdots + a_0I) + i(b_{m-1}A^{m-1}+\cdots + b_0I).$ In particular, $b_{m-1}A^{m-1}+\cdots + b_0I = \mathbf{0}.$ But since $\mu_{\mathbb{C}}(x)$ is the minimal polynomial of $A$ over $\mathbb{C}$, no nonzero polynomial of smaller degree can annihilate $A$, so $b_{m-1}=\cdots=b_0 = 0$. Thus, all coefficients of $\mu_{\mathbb{C}}(x)$ are real numbers.

Thus, $\mu_{\mathbb{C}}(x)\in\mathbb{R}[x]$, so by the definition of minimal polynomial, it follows that $\mu_{\mathbb{R}}(x)$ divides $\mu_{\mathbb{C}}(x)$ in $\mathbb{R}[x]$, and hence in $\mathbb{C}[x]$. Since both polynomials are monic and they are associates, they are equal. QED


So, yes, your argument is correct.
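As a tiny computational sanity check of the step where the imaginary part is forced to vanish (a sketch only, assuming SymPy; the $2\times 2$ rotation matrix is an example I chose, with $m=2$):

```python
# Sketch: for A = [[0,-1],[1,0]] (minimal polynomial x^2 + 1, so m = 2), check
# symbolically that b1*A + b0*I = 0 forces b0 = b1 = 0, i.e. only the zero
# polynomial of degree < m annihilates A.
import sympy as sp

A = sp.Matrix([[0, -1],
               [1,  0]])
b0, b1 = sp.symbols('b0 b1')
print(sp.solve(list(b1 * A + b0 * sp.eye(2)), [b0, b1]))   # {b0: 0, b1: 0}
```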

  • 0
This is a very pedagogical and crystal-clear answer, at the most elementary and explicit possible level. (2018-04-03)
2

As Andrea explained, the statement in the question results immediately from the following one.

Let $K$ be a subfield of a field $L$, let $A$ be an $m$ by $n$ matrix with coefficients in $K$, and assume that the equation $Ax=0$ has a nonzero solution in $L^n$. Then it has a nonzero solution in $K^n$.

But this is essentially obvious: the algorithm that produces such a solution, or shows that none exists (Gaussian elimination), uses only arithmetic in the field generated by the entries of $A$.
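A minimal sketch of this remark, assuming SymPy (the matrix is an arbitrary rational example I chose): exact elimination over $\mathbb{Q}$ produces a nullspace basis with rational entries, and those same vectors are of course nonzero solutions of $Ax=0$ over any extension field.

```python
# Sketch: Gaussian elimination over Q never leaves Q, so the nullspace basis
# it returns already lies in K^n and also witnesses solvability over C.
import sympy as sp

A = sp.Matrix([[1, 2, 3],
               [2, 4, 6]])            # example rational matrix with a nontrivial kernel
R, pivots = A.rref()                  # exact elimination over Q; entries stay rational
print(R, pivots)
print(A.nullspace())                  # rational basis vectors; they solve Ax = 0 over C as well
```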