
Maybe this is a really 'stupid' question, or my thoughts are running wild after studying too much, but I have the following example exam question:

Is the following statement true or false (provide a proof or a counterexample): Let $V$ be a vector space of finite dimension with $L : V \rightarrow V $ and $K : V \rightarrow V$ diagonalizable linear transformations. $L$ and $K$ are equal if and only if they have the same eigenvalues and the same eigenspaces.

Well, the '$\implies$' direction is easy to prove. It's the '$\impliedby$' direction that bothers me.

Consider the following linear transformations: $$ L : \mathbb{R}^2 \rightarrow \mathbb{R}^2 : (x,y) \mapsto (-x,y)$$

and

$$ K : \mathbb{R}^2 \rightarrow \mathbb{R}^2 : (x,y) \mapsto (x,-y)$$

It's easy to find the following:

  • Eigenvalues of $L$: $\lambda_1 = 1$ and $\lambda_2 = -1$
  • Eigenspaces of $L$: $E_{\lambda_1} = span((1,0))$ and $E_{\lambda_2} = span((0,1))$

and

  • Eigenvalues of $K$: $\lambda_1 = 1$ and $\lambda_2 = -1$
  • Eigenspaces of $K$: $E_{\lambda_1} = span((0,1))$ and $E_{\lambda_2} = span((1,0))$.

Now one could argue that they have the same eigenvalues and the same eigenspaces (both have eigenvalues $1$ and $-1$, and both have an eigenspace $span((1,0))$ and an eigenspace $span((0,1))$), yet they are obviously not the same transformation. However, I'm pretty sure the statement above is true.
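The situation above can be checked numerically. The following sketch (my own illustration, using numpy; the matrices are just $L$ and $K$ from the question written in the standard basis) confirms that the two maps share the eigenvalue set $\{1,-1\}$ and the eigenspaces $span((1,0))$, $span((0,1))$ as sets, but pair them up differently, so the operators differ:

```python
import numpy as np

# L(x, y) = (-x, y) and K(x, y) = (x, -y) as matrices in the standard basis
L = np.array([[-1.0, 0.0],
              [ 0.0, 1.0]])
K = np.array([[ 1.0, 0.0],
              [ 0.0, -1.0]])

# Both have the same set of eigenvalues, {1, -1}:
assert sorted(np.linalg.eigvals(L)) == sorted(np.linalg.eigvals(K))

# Both have span((1,0)) and span((0,1)) among their eigenspaces, but
# attached to *different* eigenvalues: on span((1,0)), L acts by -1
# while K acts by +1.
e1 = np.array([1.0, 0.0])
assert np.allclose(L @ e1, -e1)
assert np.allclose(K @ e1, e1)

# So, as operators, they are not equal:
assert not np.allclose(L, K)
```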

Should this be clarified in the question, or are eigenspaces 'inseparable' from their eigenvalues? I'm quite sure you can't 'disconnect' an eigenspace from its eigenvalue, but in my opinion the question is inherently vague on that point.

  • They are inseparable. – 2017-01-23
  • Welp, that's what I thought, thanks anyway. – 2017-01-23

1 Answer


The point of decomposition into eigenspaces (which is what diagonalisation is about) is to understand the action of an operator (a linear transformation of the space to itself) in terms of its action on certain well-chosen subspaces. These need to be invariant subspaces for the operator (otherwise the operator does not act on the subspace at all), and one needs sufficiently many subspaces to completely capture the action of the original operator. The latter point is satisfied if the sum of the subspaces equals the whole space: then every vector can be decomposed as a sum of components in the subspaces, the action on the subspaces determines where the individual components go under the operator, and by linearity the original vector goes to the sum of their images. In general one moreover wants a direct sum, so that the decomposition of a vector into components can be done in a unique way (this is rarely a constraint, as one anyway wants to choose the smallest possible subspaces whose sum gives the whole space, to keep things as simple as possible).
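The decompose-then-sum recipe just described can be spelled out for the asker's $L$ (a sketch of my own, using numpy): write $v = x\,(1,0) + y\,(0,1)$, multiply each component by its eigenvalue, and add.

```python
import numpy as np

# L(x, y) = (-x, y): eigenvalue -1 on span((1,0)), eigenvalue +1 on span((0,1)).
L = np.array([[-1.0, 0.0],
              [ 0.0, 1.0]])

# Decompose v along the eigenspaces, scale each component by its
# eigenvalue, and sum; linearity says this equals L(v).
v = np.array([3.0, 5.0])
components = [v[0] * np.array([1.0, 0.0]),   # component in span((1,0))
              v[1] * np.array([0.0, 1.0])]   # component in span((0,1))
image = (-1) * components[0] + (+1) * components[1]

assert np.allclose(L @ v, image)   # both equal (-3, 5)
```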

For the case of a diagonalisable operator, the situation is as good as it can get: the eigenspaces form a direct sum that equals the whole space (that is what diagonalisable means), and on each of them the operator acts by multiplication by a scalar (namely the eigenvalue of that subspace), which is as simple as an action can get. So to answer your question: if one knows all the eigenspaces together with their eigenvalues, and if the sum of these subspaces is the whole space, then the operator is completely determined. It acts on any vector by multiplying each of its components along the eigenspaces by the corresponding eigenvalue and summing the results.

As you see, I said "eigenspaces with their eigenvalues", not "eigenspaces and eigenvalues". Changing the eigenvalue associated to an eigenspace always changes the operator; in particular, so does permuting the eigenvalues among a fixed set of eigenspaces, which is exactly what happens in your example.

  • Thank you very much for answering such a 'noobie' question in such a detailed manner ;) – 2017-01-23