
This question is really several short general questions to clear up some confusions. We'll start with where I'm at:

An endomorphism $\phi$ is a map from a vector space $V$ to itself. After choosing a basis $\mathcal{B}$ of $V$, one may determine a matrix to represent such an endomorphism with respect to that basis, say $[\phi]_{\mathcal{B}}$.

Question 1) If given a matrix without a basis specified, can you deduce a unique endomorphism it corresponds to? (My lean is no.)

Question 2) In a similarity transform, say $A=SDS^{-1}$ where $D$ is diagonal, $S$ is the change-of-basis matrix from one basis to another. My question is: since $D$ is diagonal, does that mean the matrix $D$ represents the same endomorphism as $A$, but with respect to the standard basis of $\mathbb{R}^n$? Or are we unable to determine which bases are involved if we are only given the matrices?

Question 3) Given a matrix representing an endomorphism, is it possible to determine the basis used to represent the endomorphism? (My lean is yes.)

Overarching Question) I am trying to understand what happens under similarity transforms. I understand that we input a vector, a change of basis is applied to it, the endomorphism is applied with respect to the new basis, and then the result is changed back to the old basis; my confusion concerns the construction of the similarity matrix. If a matrix is diagonalizable, then $S$ turns out to be the eigenvectors arranged in a prescribed order. Why is this? Why do the eigenvectors of the endomorphism $\phi$ with respect to one basis act as a change-of-basis matrix, and what basis do they go to? (This is really part of Question 2.) Does this send the vectors to the standard basis, or to some other basis that just happens to diagonalize the endomorphism?

Thanks!

  • Yup, looks good. (2012-06-21)

3 Answers


Remark:

Let $n = \dim V$, let $A$ be a given $n \times n$ matrix with entries in a field $\mathbb K$, and consider the map $f_A : \mathbb K^n \to \mathbb K^n$ defined by $f_A(X) = A\,{}^tX$ for $X=(x_1,\dots,x_n) \in \mathbb K^n$. One can show that $A$ is the matrix of $f_A$ with respect to the canonical basis $B_0=(e_i)_{1 \leq i \leq n}$ of $\mathbb K^n$, where $e_i=(\delta_{ij})_{1 \leq j \leq n}$ with $\delta_{ij}=1$ if $i=j$ and $\delta_{ij}=0$ otherwise.

If $B$ is any basis of $V$, there exists a unique isomorphism $\phi : V \to \mathbb K^n$ such that $\phi(B)=B_0$, and then $g_{\phi}= \phi^{-1} \circ f_A \circ \phi$ is the unique map from $V$ to $V$ represented by $A$ with respect to the basis $B$.

Conversely, every linear map from $V$ to $V$ represented by $A$ with respect to some basis $B$ of $V$ has the form $g_{\phi}$, where $\phi$ is an isomorphism from $V$ to $\mathbb K^n$; precisely, $\phi$ is the unique isomorphism from $V$ to $\mathbb K^n$ that sends the basis $B$ of $V$ to the canonical basis $B_0$ of $\mathbb K^n$.
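
Here is a minimal numerical sketch of this construction, assuming $V = \mathbb{R}^2$; the matrices $A$ and $M$ below are my own invented examples, with the basis $B$ stored as the columns of $M$:

```python
import numpy as np

# The given matrix A (arbitrary, for illustration).
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# A basis B of V = R^2, stored as the columns of M (also arbitrary).
M = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# phi : V -> K^n sends the basis B to the canonical basis B_0;
# in standard coordinates it acts as M^{-1}.
phi = np.linalg.inv(M)

# g_phi = phi^{-1} o f_A o phi, written as a matrix in the standard basis.
G = M @ A @ phi

# The matrix of g_phi with respect to the basis B is M^{-1} G M,
# which recovers A: A indeed represents g_phi in the basis B.
print(np.allclose(phi @ G @ M, A))  # True
```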


1) No. Any choice of basis determines an endomorphism that the matrix corresponds to, and in general different choices of basis will give different endomorphisms. (They are, of course, all similar.)

2) No. Consider the case where $A$ is not diagonal: two matrices represent the same linear transformation with respect to a fixed basis if and only if they have the same entries, so $D$ and a non-diagonal $A$ cannot both represent the endomorphism with respect to the standard basis.

3) No. This is the same question as 1).

4) To see this concretely, write out the condition $AS = SD$ for $D$ a diagonal matrix explicitly. To see this abstractly, let $T$ be a linear transformation written with respect to a basis $e_i$, and suppose it has a basis $v_i$ of eigenvectors with eigenvalues $\lambda_i$. Then $T$ is diagonal with respect to the basis $v_i$. If $S$ denotes the linear transformation which sends $v_i$ to $e_i$, then $$STS^{-1}(e_i) = ST v_i = S \lambda_i v_i = \lambda_i S v_i = \lambda_i e_i,$$

so $STS^{-1}$ is diagonal with respect to the basis $e_i$. Now, the above is a statement about linear transformations which is independent of basis. Writing everything above in terms of the basis $e_i$ gives you a corresponding statement about matrices, and in that statement $S^{-1}$ is more or less by definition the matrix whose columns are the coordinates of the $v_i$ (with respect to the basis $e_i$).
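
A quick numerical check of point 4 (a sketch only; the matrix $A$ below is made up for illustration). NumPy's `np.linalg.eig` returns the eigenvectors as the columns of $S$, and both $AS = SD$ and $A = SDS^{-1}$ can be verified directly:

```python
import numpy as np

# A diagonalizable matrix, chosen arbitrarily; its eigenvalues are 5 and 2.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# np.linalg.eig returns the eigenvalues and a matrix S whose
# columns are the corresponding eigenvectors.
eigvals, S = np.linalg.eig(A)
D = np.diag(eigvals)

# AS = SD holds column by column: A @ v_i = lambda_i * v_i.
print(np.allclose(A @ S, S @ D))                 # True

# Equivalently A = S D S^{-1}: conjugating by S diagonalizes A.
print(np.allclose(A, S @ D @ np.linalg.inv(S)))  # True
```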


I have often thought that elementary linear algebra would be less confusing if it were made explicit that when changing bases one is really working with two vector spaces: first, the original vector space $V$ one cares about, and second, the concrete vector space $\mathbb{C}^n$, where $n = \dim V$, with its distinguished basis. A basis for $V$ is then equivalent to the choice of an isomorphism $f : \mathbb{C}^n \to V$, and changing bases corresponds to changing the choice of this map. Crucially, there are two natural ways to do this: either precompose with an automorphism $\mathbb{C}^n \to \mathbb{C}^n$ or postcompose with an automorphism $V \to V$. The two give the same result, but the notion of sameness here is itself dependent on the choice of $f$.

In category theory, one says that the finite-dimensional vector space (over a fixed field) of a given dimension is unique up to isomorphism, but not unique up to unique isomorphism, and so when identifying different vector spaces one must keep track of the identifications one is using or else risk getting hopelessly lost.

In other words, when dealing with objects that are isomorphic but for which the isomorphism is not unique, it is better to behave as if they are different objects even if they are in some sense "the same."


It might help to think of the same linear transformation with respect to two different bases as operating on two different data types. That is, with respect to a given basis $\mathcal{B}$, the corresponding matrix should perhaps be thought of as a function which accepts and spits out "$\mathcal{B}$-type" vectors, and with respect to a different basis $\mathcal{C}$ accepts and spits out $\mathcal{C}$-type vectors. These operations are compatible but to specify the compatibility you need to typecast from $\mathcal{B}$-type vectors to $\mathcal{C}$-type vectors and back again, and this is exactly what the change-of-basis matrix is supposed to do.

The change-of-basis matrix therefore has different input and output types.
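
A small sketch of this "typecasting" picture (the bases below are invented for illustration): store each basis as the columns of a matrix; the cast from $\mathcal{B}$-coordinates to $\mathcal{C}$-coordinates is then $C^{-1}B$.

```python
import numpy as np

# Two bases of R^2, stored as columns; both invented for illustration.
Bmat = np.array([[1.0, 1.0],
                 [0.0, 1.0]])   # basis B
Cmat = np.array([[2.0, 0.0],
                 [1.0, 1.0]])   # basis C

# "Typecast" from B-coordinates to C-coordinates: rebuild the actual
# vector (Bmat @ x), then re-express it in terms of C.
B_to_C = np.linalg.inv(Cmat) @ Bmat
C_to_B = np.linalg.inv(B_to_C)  # the cast back in the other direction

x_B = np.array([3.0, -1.0])     # a vector given in B-coordinates
x_C = B_to_C @ x_B              # the same vector in C-coordinates

# Both coordinate columns describe the same underlying vector of R^2.
print(np.allclose(Bmat @ x_B, Cmat @ x_C))  # True
print(np.allclose(C_to_B @ x_C, x_B))       # True: casting back round-trips
```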

  • Okay, I see what you mean. I'm going to try a few examples today and see it in action. It was too late last night, thanks. (2012-06-21)

Qiaochu's answer is a great treatment, but I think #3 was dismissed a little hastily.

As I read it, #1 and #3 are not the same: much more is given for #3, namely you get both the matrix and a transformation and the fact that they are related.

So my reading of #3 is:

If we are told that a matrix $B$ is a matrix for transformation $f$ in terms of an unknown basis $\beta$, can the elements of $\beta$ be recovered explicitly?

Fix any basis you like, call it $\alpha$, and express $f$ as a matrix $A$ corresponding to that basis. Since we know they correspond to the same transformation, we know $A$ and $B$ are similar via a nonsingular matrix $X$. This matrix $X$ is a change of basis matrix, which converts between the coefficients in terms of mystery basis $\beta$ and those of our known basis $\alpha$.

Either $X$ or its inverse will convert from $\beta$ to $\alpha$ depending on how you set it up, so let's just pick the one which goes this direction and relabel it $X$. Let's also assume we have been using matrix multiplication on the left of column vectors.

To find $\beta_1$, for example, compute $Xe_1$ where $e_1$ is the column unit vector with $1$ in the first spot. (That is the representation of $\beta_1$ in basis $\beta$.) The output is the coefficients of $\beta_1$ represented in terms of $\alpha$, so you may just arrange these coefficients in front of your known basis elements and add everything up to get $\beta_1$. The $\beta_i$ are recovered using the other unit vectors $e_i$.
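
Here is a sketch of this recovery recipe in NumPy; the matrices $A$ and $X$ are made up, and $\alpha$ is taken to be the standard basis so that vectors coincide with their $\alpha$-coordinate columns:

```python
import numpy as np

# The matrix of f with respect to the known basis alpha (here, the
# standard basis of R^2), chosen arbitrarily for illustration.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Suppose we have found X with A = X B X^{-1}, where X converts
# beta-coordinates to alpha-coordinates (X is also made up here).
X = np.array([[1.0, 1.0],
              [1.0, -2.0]])
B = np.linalg.inv(X) @ A @ X    # the matrix of f in the mystery basis beta

# X @ e_1 reads off beta_1 in alpha-coordinates -- it is simply the
# first column of X, so here beta_1 = 1*alpha_1 + 1*alpha_2.
e1 = np.array([1.0, 0.0])
print(X @ e1)       # [1. 1.]
print(X[:, 0])      # the same vector, read directly as a column of X
```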

I suppose there could be some practical difficulty, but theoretically this seems sound. I cannot tell if the last paragraph is addressing this or not.

Added: Jack Schmidt pointed out to me the hole in my idea: $X$ is not uniquely determined because, for example, $B$ could be self-similar via a nonidentity matrix $Y$, and then $A=X^{-1}BX=(YX)^{-1}B(YX)\dots$ so #3 is still an underdetermined problem in general. It's still more determined than #1, but not as determined as I wanted it to be.
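
To see the underdetermination concretely (a sketch with invented matrices): take a $B$ with a repeated eigenvalue, which is self-similar via a nonidentity $Y$; then $X$ and $YX$ conjugate $B$ to the same $A$ yet have different columns, hence hand back different candidate bases.

```python
import numpy as np

# B has a repeated eigenvalue, so it is self-similar via a nonidentity Y.
B = np.array([[2.0, 1.0],
              [0.0, 2.0]])
Y = np.array([[1.0, 1.0],
              [0.0, 1.0]])
print(np.allclose(np.linalg.inv(Y) @ B @ Y, B))  # True: Y^{-1} B Y = B

# Pick any invertible X; both X and YX then conjugate B to the same A ...
X = np.array([[1.0, 2.0],
              [3.0, 4.0]])
A = np.linalg.inv(X) @ B @ X
YX = Y @ X
print(np.allclose(np.linalg.inv(YX) @ B @ YX, A))  # True

# ... yet X and YX have different columns, so they would hand back
# different candidate bases beta: the recovery problem is underdetermined.
print(X)
print(YX)
```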

  • @JackSchmidt Thank you, I see the potential for $B$ to be self-similar by a nonidentity invertible matrix, and that complicates things. OK, I'm convinced that the problem is still underdetermined, even with the added information. I'll have to stick in a note. (2012-06-21)