
My book has a proof of the following theorem:

Two finite-dimensional vector spaces $V$ and $W$ are isomorphic $\iff$ they have the same dimension.

They prove this in one direction by saying

Assume $V$ and $W$ have dimension $n$. An arbitrary vector in $V$ can be represented as $v = c_1v_1 + c_2v_2 + \ldots + c_nv_n$

And you can define a linear transformation $T: V \rightarrow W$ as follows

$T(v) = c_1w_1 + c_2w_2 + \ldots + c_nw_n$

I'm confused by this last sentence. Are they just choosing to define the transformation that way? Usually a transformation looks something like $T(v) = 2v$, where you plug in the value of $v$ and get a result. But their equation has $w$'s on the right-hand side, not $v$'s, so what am I supposed to be plugging in? Or are they just trying to say that obviously $T(v)$ is going to map to some vector in $W$, so of course it can be represented as a linear combination of the basis vectors of $W$? But shouldn't a transformation show how $w$ and $v$ are related? It seems like they're saying that $T(v)$ can equal any vector in $W$, but I thought that's what they were trying to prove in the first place, so why are they just assuming it? Sorry if my thought process is confusing; I'm just confused about what the book is trying to say.

The proof goes on to say

It can be shown that this linear transformation is both one-to-one and onto.

And ends there.

How can it be shown that this transformation is one-to-one and onto? I guess I can see how it's onto, since they defined $T(v)$ to be just any vector in $W$ (I still don't get why they can do that, though). But why one-to-one? They didn't even show how $v$ maps into $W$, so I don't see how we could know that.

I guess I'm just missing what they're trying to say here.

1 Answer

Let's write this proof in more detail. I think you're getting confused by the terseness of the argument.

Assume $V$ and $W$ have dimension $n$. Choose a basis for $V$: $\{v_1, v_2, \ldots, v_n\}$. Choose a basis for $W$: $\{w_1, w_2, \ldots, w_n\}$. Note that both bases have $n$ elements, because $V$ and $W$ are $n$-dimensional.

Now let's define $T$ as follows. Given $v \in V$, we must define a vector $T(v) \in W$. Write $v = c_1 v_1 + c_2 v_2 + \ldots + c_n v_n$. There is exactly one way to choose the $c$'s, because $\{v_1, v_2, \ldots, v_n\}$ is a basis. Then define $T(v) = c_1 w_1 + c_2 w_2 + \ldots + c_n w_n$.
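For concreteness, here is a small numerical sketch of that recipe in Python with NumPy. The bases below are made-up examples for $V = W = \mathbb{R}^2$, not anything from the book; the point is only the two steps: solve for the unique coordinates $c$ of $v$ in the $V$-basis, then reuse those same coordinates in the $W$-basis.

```python
import numpy as np

# Hypothetical example bases (columns are the basis vectors)
v_basis = np.array([[1.0, 0.0], [1.0, 1.0]]).T  # v1 = (1,0), v2 = (1,1)
w_basis = np.array([[2.0, 0.0], [0.0, 3.0]]).T  # w1 = (2,0), w2 = (0,3)

def T(v):
    # Step 1: find the unique c with v = c1*v1 + c2*v2
    c = np.linalg.solve(v_basis, v)
    # Step 2: reuse the SAME coordinates in W's basis
    return w_basis @ c

v = np.array([3.0, 2.0])  # here v = 1*v1 + 2*v2, so c = (1, 2)
print(T(v))               # 1*w1 + 2*w2 = (2, 6)
```

Note that $T$ never gets to pick an arbitrary vector of $W$: once the two bases are fixed, the coordinates of $v$ completely determine $T(v)$.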

Is this clearer now? The previous paragraph gives the recipe for computing $T(v)$ from $v$: the coefficients $c_1, \ldots, c_n$ are determined by $v$, and those same coefficients are then used to build $T(v)$.
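Since the question also asks why $T$ is one-to-one and onto, here is a short sketch filling in the step the book omits:

```latex
Suppose $T(u) = T(v)$. Write $u = a_1 v_1 + \ldots + a_n v_n$ and
$v = b_1 v_1 + \ldots + b_n v_n$. By the definition of $T$,
\[
T(u) - T(v) = (a_1 - b_1) w_1 + \ldots + (a_n - b_n) w_n = 0,
\]
and since $\{w_1, \ldots, w_n\}$ is linearly independent, $a_i = b_i$
for all $i$, hence $u = v$. So $T$ is one-to-one. For onto: any
$w \in W$ can be written as $w = c_1 w_1 + \ldots + c_n w_n$, and then
$w = T(c_1 v_1 + \ldots + c_n v_n)$.
```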

  • That does make more sense -- it didn't even occur to me that the $c$'s were supposed to be the same in both equations. (2017-01-31)