
Let $V$ be a finite dimensional vector space, $T \in L(V)$ a linear map whose matrix $M(T)$ is the same for every basis of $V$. Show that $T$ is a scalar multiple of the identity map $I$.

I know it has to do with the vectors in a basis being linearly independent, but I don't know where to go with that when trying to find a contradiction.

  • @bolzano no, $AM=MA$ (2017-01-04)
  • @bolzano I believe you mean we have $AMA^{-1} = M$ for every invertible $A$, or $AM=MA$ for every invertible $A$, unless I'm missing something. Certainly the condition you gave seems to imply that $M=0$. (2017-01-04)

6 Answers

1

Hint: The condition "$M$ is the same for every basis of $V$" can be written:

$$\forall P \in \mathbf{GL}_n(K) \;\;\;\; PMP^{-1} = M$$

i.e. $M$ commutes with every element of $\mathbf{GL}_n(K)$ (when $M$ is itself invertible, this says that $M$ lies in the center of $\mathbf{GL}_n(K)$).

You can check what the above condition means for some well-chosen matrices of $\mathbf{GL}_n(K)$ (e.g. the transvection $I_n + E_{i,j}$ with $i \neq j$, where $E_{i,j}$ has a single $1$ in position $(i,j)$ and zeros elsewhere).
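This is not part of the hint, but the effect of conjugating by a transvection can be sanity-checked numerically; the matrices below are arbitrary illustrative choices, using numpy:

```python
import numpy as np

n = 3
# Transvection P = I_n + E_{1,2}: the identity plus a single 1 in position (1,2).
P = np.eye(n)
P[0, 1] = 1.0

# A diagonal but non-scalar matrix is NOT fixed by this conjugation.
M = np.diag([1.0, 2.0, 3.0])
conj = P @ M @ np.linalg.inv(P)
print(np.allclose(conj, M))   # False

# A scalar matrix commutes with everything, so conjugation fixes it.
S = 5.0 * np.eye(n)
print(np.allclose(P @ S @ np.linalg.inv(P), S))   # True
```

The only entry that changes is $(1,2)$, where the difference is $m_{2,2}-m_{1,1}$; this is how the transvection detects unequal diagonal entries.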

0

We must have $A=SAS^{-1}$, for every invertible matrix $S$, so $AS=SA$.

Let $E_i(c)$ be the matrix obtained from the identity by multiplying the $i$-th row by $c\ne0$. Then $E_i(c)A$ is the same as $A$ with the $i$-th row multiplied by $c$, whereas $AE_i(c)$ is the same as $A$ but with the $i$-th column multiplied by $c$.

Choosing $i=1$ and $c=2$, we see that for $k\ne1$ the coefficient in place $(1,k)$ is multiplied by $2$ in $E_1(2)A$, whereas it is unchanged in $AE_1(2)$. Hence it is zero. So the first row is $[a_{11}\;0\;\dots\;0]$.

You can prove similarly that the matrix is diagonal.

In particular there is a basis formed by eigenvectors; swapping two of them permutes the corresponding diagonal entries without changing the matrix, so all the eigenvalues are equal.

The converse is also true: the matrix of the map $v\mapsto av$ relative to every basis is $aI$.
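Not from the answer itself, but the row-versus-column scaling behind this argument is easy to check with numpy (the $3\times 3$ matrix $A$ below is an arbitrary example):

```python
import numpy as np

# E_1(2): the identity with its first row multiplied by 2.
E = np.eye(3)
E[0, 0] = 2.0

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

left = E @ A     # first ROW of A doubled
right = A @ E    # first COLUMN of A doubled

print(left[0])      # [2. 4. 6.]
print(right[:, 0])  # [ 2.  8. 14.]
```

If $E_1(2)A = AE_1(2)$, the entries $(1,k)$ with $k\neq 1$ are doubled on the left but unchanged on the right, so they must vanish.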

0

If you have a finite-dimensional vector space $V$ and you want to change the representation of vectors according to a new basis, you just need to find a full-rank matrix $X$ that transfers every vector $v$ to its new representation $Xv$.

Now, consider a linear map $M:V\rightarrow V$ with

$Mv=w$

If we want the vectors $v$ and $w$ to be represented in a new basis, there exists a matrix $X$ for it.

Multiply both sides of $Mv=w$ by $X$ to get

$XMv=Xw$

The vector $w$ is transferred to the new basis. However, the same can be done for $v$ only if we can commute $X$ and $M$. This is possible for every full-rank matrix $X$ if and only if $M$ is a multiple of the identity matrix.
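The commuting claim can at least be sanity-checked numerically. This is not part of the answer, and the matrices below are arbitrary choices (a random matrix is invertible with probability 1):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

M = 3.0 * np.eye(n)                 # a scalar multiple of the identity
N = np.diag([1.0, 2.0, 3.0, 4.0])   # diagonal but NOT scalar

# A scalar matrix commutes with every matrix, invertible or not.
for _ in range(5):
    X = rng.normal(size=(n, n))
    assert np.allclose(X @ M, M @ X)

# One explicit invertible X that fails to commute with N:
X = np.eye(n)
X[0, 1] = 1.0
print(np.allclose(X @ N, N @ X))   # False
```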

0

Let's show that the matrix element $m_{i,j}=0$ for $i\neq j$: form the new basis $u_a=v_a$ for $a\neq i$, and $u_i=v_i+v_j$,

which corresponds to a change-of-basis matrix $A$ that is the identity except for the off-diagonal entry $A_{ij}=1$.

After that we have $A^{-1}MA=M$, which implies that $m_{j,i}=0$ and also $m_{ii}=m_{jj}$.

Repeating the same procedure for different $i\neq j$ we conclude that all off-diagonal elements are zero and the diagonal elements are all equal.

To exemplify the method, take the $4\times 4$ matrix $$ M=\left( \begin{array}{cccc} m_{1,1} & m_{1,2} & m_{1,3} & m_{1,4} \\ m_{2,1} & m_{2,2} & m_{2,3} & m_{2,4} \\ m_{3,1} & m_{3,2} & m_{3,3} & m_{3,4} \\ m_{4,1} & m_{4,2} & m_{4,3} & m_{4,4} \\ \end{array} \right) $$ and $i=1$, $j=2$, which gives for $A^{-1}MA-M$: $$ \left( \begin{array}{cccc} -m_{2,1} & m_{1,1}-m_{2,1}-m_{2,2} & -m_{2,3} & -m_{2,4} \\ 0 & m_{2,1} & 0 & 0 \\ 0 & m_{3,1} & 0 & 0 \\ 0 & m_{4,1} & 0 & 0 \\ \end{array} \right) $$ Setting this to zero implies $m_{2,1}=m_{2,3}=m_{2,4}=m_{3,1}=m_{4,1}=0$ and $m_{1,1}=m_{2,2}$.
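Not in the original answer, but this $4\times4$ example is easy to verify numerically (the particular entries of $M$ below are arbitrary):

```python
import numpy as np

# A = I + E_{1,2}: the change of basis u_1 = v_1 + v_2, u_a = v_a otherwise.
A = np.eye(4)
A[0, 1] = 1.0

# An arbitrary test matrix with m_{i,j} = 4(i-1) + j.
M = np.arange(1.0, 17.0).reshape(4, 4)

D = np.linalg.inv(A) @ M @ A - M
print(np.round(D, 6))
# Forcing D = 0 gives m_{2,1} = m_{2,3} = m_{2,4} = m_{3,1} = m_{4,1} = 0
# and m_{1,1} = m_{2,2}, matching the displayed matrix above.
```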

0

Note that I will be using Einstein summation notation to denote matrix multiplication, where repeated indices are summed over. E.g. $c_{ik}=a_{ij}b_{jk}$ denotes the same thing as $c_{ik}=\sum_j a_{ij}b_{jk}$.

Let $E_{ij}$ denote the matrix that is all zeros except for at location $ij$, and assume $i\ne j$. Then $E_{ij}^2=0$, so $(1+E_{ij})(1-E_{ij})=1$. Now since $1+E_{ij}$ is invertible, the condition given tells us that $(1+E_{ij})M=M+E_{ij}M=M(1+E_{ij})=M+ME_{ij}$. Then this tells us that $E_{ij}M=ME_{ij}$.

Now $(E_{ij}M)_{rc}=\delta_{ri}\delta_{kj}m_{kc}=\delta_{ri}m_{jc}$ is the matrix that is all zeros except for the $i$th row, which contains the $j$th row of $M$. Similarly, $(ME_{ij})_{rc}=m_{rk}\delta_{ki}\delta_{jc}=m_{ri}\delta_{jc}$, so $ME_{ij}$ is the matrix that is all zeros except for the $j$th column, which contains the $i$th column of $M$. Comparing these two matrices, we have $\delta_{ri}m_{jc}=m_{ri}\delta_{jc}$. Letting $r=i$, we have $m_{jc}=m_{ii}\delta_{jc}$, i.e. $m_{jc}=0$ when $c\ne j$ and $m_{jj}=m_{ii}$. Since $i$ and $j$ were arbitrary, every entry of the diagonal must be equal and every off-diagonal entry must be zero.

In other words, $M=\lambda\cdot 1$.
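As a check on the two index computations (not part of the answer; the matrix $M$ and the indices below are arbitrary):

```python
import numpy as np

n = 4
i, j = 1, 2                  # 0-based here, with i != j
E = np.zeros((n, n))
E[i, j] = 1.0                # the matrix E_{ij}

M = np.arange(1.0, 17.0).reshape(n, n)

left = E @ M    # zero except row i, which holds row j of M
right = M @ E   # zero except column j, which holds column i of M

assert np.allclose(left[i], M[j])
assert np.allclose(right[:, j], M[:, i])
```

Equating `left` and `right` entrywise is exactly the condition $\delta_{ri}m_{jc}=m_{ri}\delta_{jc}$ derived above.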

0

Here is a (mostly) 'matrix free' approach:

Suppose there is some $b$ such that $Tb \notin \operatorname{sp} \{ b\}$.

Since $b$ and $Tb$ are then linearly independent, extend them to a basis $b,Tb,v_3,\dots$. Reading off the first column of the matrix $A$ of $T$ (which is the same in every basis), we see that $Ae_1 = e_2$.

Now choose the basis $b,\,b+Tb,\,v_3,\dots$. In it, $Tb = -b + (b+Tb)$, so reading off the first column gives $Ae_1 = -e_1 + e_2$.

Combining the two implies $e_1 = 0$, a contradiction. Hence $Tb \in \operatorname{sp} \{ b\}$ for all $b$.

Hence there is some $\lambda_b$ such that $Tb= \lambda_b b$ for all $b$. Suppose $b_1,\dots,b_n$ is a basis; then $T(b_1+\cdots +b_n) = \lambda_{b_1+\cdots +b_n} (b_1+\cdots +b_n) = \lambda_{b_1} b_1 + \cdots + \lambda_{b_n} b_n$, and linear independence gives $\lambda_{b_1} = \cdots = \lambda_{b_n} = \lambda_{b_1+\cdots+b_n}$. Hence $T$ is a scalar times the identity.
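To close the loop, the converse direction (a scalar matrix looks the same in every basis) can be checked numerically. This sketch is not part of the answer; it uses random bases via numpy, relying on a random matrix being invertible with probability 1:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
T = 2.5 * np.eye(n)   # matrix of the map v -> 2.5 v

for _ in range(3):
    P = rng.normal(size=(n, n))   # columns form a (generic) random basis
    # The matrix of T in the new basis equals T itself.
    assert np.allclose(np.linalg.inv(P) @ T @ P, T)
```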