
I'm reading Friedberg, Insel and Spence's Linear Algebra. In Section 2.4, The Change of Coordinate Matrix, they give a particular example of a matrix that effects a change of basis. I haven't verified that the initial set of vectors forms a basis, but let's assume it does.

They then go on to mention:

Note that $Q$ [the matrix in question] equals $[I_{V}]_{\beta'}^{\beta}$, where $I$ denotes the identity transformation on $V$ [and the beta's denote the bases].

Here's the matrix:

[Image: the matrix $Q$ from the book's example.]

I'm not sure why they say that the matrix represents the identity transformation. Doesn't the identity transformation, by definition, map each vector to itself? This matrix doesn't do that, because it maps the vector $(1,1)^{T}$ to $\frac{1}{\sqrt{5}}(1,3)^{T}$.

What am I missing?

  • 0
    $\beta$ and $\beta'$ are your bases in the initial space and in the arriving space. Let's suppose $\beta = \{v_1,v_2\}$ and $\beta' = \{w_1,w_2\}$. This means $Qv_1=w_1$ and $Qv_2 = w_2$. Do you see now? (2017-01-25)
  • 0
    @Exodd Absolutely, but how does this make the matrix a representation of the identity transformation? If it were, shouldn't we have $Qv_i = v_i$? (2017-01-25)
  • 0
    As Johnathan Grant explains in the comment on his answer below, a matrix is a *representation* of a linear map, not the map itself, and that representation depends on the choice of basis for both the input and output space of the map. This is no different from a vector $v$ having different representations as a coordinate tuple in different bases: the *vector* is the same, but the representation of that vector might be different in different bases. (2017-01-25)

1 Answer


One way to think of a basis is as a fixed linear isomorphism $\phi:V\to \mathbb{R}^n$. So given a basis $(\alpha_1,\ldots,\alpha_n)$ and a basis $(\beta_1,\ldots,\beta_n)$, we have two isomorphisms $\phi_\alpha:V\to \mathbb{R}^n$ and $\phi_\beta:V\to \mathbb{R}^n$, where $\phi_\alpha$ sends $\alpha_i$ to the element $(0,\ldots,0,1,0,\ldots,0)$ with the $1$ in the $i$th coordinate, and similarly for $\beta$.
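This coordinate isomorphism is easy to compute numerically. As a sketch, with $V = \mathbb{R}^2$ and a hypothetical basis $\{(1,2),(3,1)\}$ (not the book's example), $\phi_\alpha$ is just "solve for the coordinates":

```python
import numpy as np

# A basis (a_1, ..., a_n) determines the isomorphism phi_alpha: V -> R^n
# sending each basis vector a_i to the i-th standard basis vector e_i.
# Hypothetical basis for V = R^2, stored as the columns of A:
A = np.column_stack([(1, 2), (3, 1)])

def phi_alpha(v):
    # Coordinates x of v in the basis alpha satisfy A @ x = v.
    return np.linalg.solve(A, v)

# phi_alpha sends the i-th basis vector to e_i:
print(phi_alpha(np.array([1, 2])))   # -> [1. 0.]
print(phi_alpha(np.array([3, 1])))   # -> [0. 1.]
```

The inverse isomorphism $\phi_\alpha^{-1}$ is simply `A @ x`: it rebuilds the vector from its coordinates.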

A matrix is a linear map $\mathbb{R}^n\to \mathbb{R}^n$. The fact that $Q$ is a change of basis matrix means it is equal to the composite $\phi_\beta \circ \phi_\alpha^{-1}:\mathbb{R}^n \to \mathbb{R}^n$. So the composite $\phi_\beta^{-1} \circ Q \circ \phi_\alpha$ is the identity on $V$, which is what is meant by $Q=[\operatorname{Id}_V]^\beta_\alpha$.

  • 0
    Yes, the composite you've defined is the identity, not $Q$ itself, right? The text hasn't introduced similarity transforms, but anticipating them, one can make the assertion you've made. Is it the case that the book uses bad notation, since $Q$ most definitely doesn't map every vector to itself? (2017-01-25)
  • 1
    So the idea is that the matrix $Q$ represents the identity on $V$ where the source is written with basis $\alpha$ and the target with basis $\beta$. So it is not the identity matrix, but it does represent the identity map $V\to V$ where the source has basis $\alpha$ and the target has basis $\beta$. It is important to remember the distinction between linear maps and matrices: matrices represent linear maps only when bases have been chosen. (2017-01-25)
  • 0
    Perfect. I was able to understand it. Thanks. (2017-01-25)