
How does one do this in general? Is it true that if the values of a transformation are prescribed on certain inputs, and those inputs form a basis, then that somehow shows the transformation exists? If so, why?

Also see How to prove there exists a linear transformation?

OK, this seems not to have been clear. The answer in the question linked above says it is because $(1,1)$ and $(2,3)$ form a basis. Why is that so?

Thank you

  • @gerry I updated at the bottom. (2012-10-17)

2 Answers


What I think you may be trying to ask is something like this: given a basis $v_1, \ldots, v_n$ of a vector space $V$ and vectors $w_1, \ldots, w_n$ in a vector space $W$, is there a linear transformation $T$ from $V$ to $W$ such that $T v_j = w_j$ for all $j$? And the answer is yes. Each $x$ in $V$ can be written in a unique way as $x = \sum_{j=1}^n c_j v_j$ for some scalars $c_1, \ldots, c_n$, and we define $T x = \sum_{j=1}^n c_j w_j$. It is not hard to prove that this defines a linear transformation that satisfies the required conditions.
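This construction can be checked numerically. The sketch below (using hypothetical example vectors, not taken from the question) builds the matrix of $T$ as $W V^{-1}$, where the columns of $V$ are the basis vectors $v_j$ and the columns of $W$ are the targets $w_j$; this works because $c = V^{-1}x$ gives the unique coordinates of $x$, and then $Tx = Wc$:

```python
import numpy as np

# Columns of V are a basis v_1, v_2 of R^2 (hypothetical example values)
V = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # v_1 = (1,0), v_2 = (1,1)

# Columns of W are the prescribed targets w_1, w_2 (also hypothetical)
W = np.array([[2.0, 0.0],
              [3.0, 1.0]])   # w_1 = (2,3), w_2 = (0,1)

# For any x, its coordinates are c = V^{-1} x, and T x = W c,
# so the matrix of T is W V^{-1}.
T = W @ np.linalg.inv(V)

print(T @ V[:, 0])  # should be w_1
print(T @ V[:, 1])  # should be w_2
```

The same recipe works for any $n$ and any target space: invertibility of $V$ is exactly the statement that the $v_j$ form a basis.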


If the question is, why do $(1,1)$ and $(2,3)$ form a basis for ${\bf R}^2$, then the answer is that those two vectors are linearly independent, and they span ${\bf R}^2$, and that's the definition of a basis. So: are you having trouble showing the two vectors are linearly independent? Are you having trouble showing they span ${\bf R}^2$?
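For two vectors in ${\bf R}^2$, both properties can be read off from the determinant: it is nonzero iff the vectors are independent, and two independent vectors in a two-dimensional space automatically span. A quick numerical check for the vectors in the question:

```python
import numpy as np

# Columns are the vectors (1,1) and (2,3) from the question
A = np.column_stack([(1.0, 1.0), (2.0, 3.0)])

# det(A) = 1*3 - 2*1 = 1, which is nonzero, so the columns
# are linearly independent
print(np.linalg.det(A))

# rank 2 means the columns span R^2, confirming they form a basis
print(np.linalg.matrix_rank(A))
```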

EDIT: Evidently the question actually is, why is it that $(1,1)$ and $(2,3)$ being a basis for ${\bf R}^2$ implies that there is a linear transformation $T:{\bf R}^2\to{\bf R}^3$ such that $T(1,1)=(1,0,2)$ and $T(2,3)=(1,-1,4)$. The answer is, you don't actually have to know that the two vectors form a basis, you just have to know that they are linearly independent. In general, given $v_1,\dots,v_n$ in a vector space $V$, and $w_1,\dots,w_n$ in a vector space $W$, if $v_1,\dots,v_n$ are linearly independent, then there is a linear transformation $T:V\to W$ such that $T(v_i)=w_i$ for $i=1,\dots,n$. It goes like this:

First, any linearly independent set can be extended to a basis $B=\{\,v_1,\dots,v_n,v_{n+1},\dots,v_m\,\}$ (or $B=\{\,v_1,\dots,v_n,v_{n+1},\dots\,\}$ in the infinite-dimensional case).

Then, any vector $v$ in $V$ has a unique expression as a linear combination of basis vectors, $v=\sum c_iv_i$ (where, in the infinite-dimensional case, all but finitely many of the $c_i$ are zero).

Then, we can define a map $T:V\to W$ by $T(\sum c_iv_i)=\sum c_iT(v_i)$ where $T(v_i)=w_i$ for $1\le i\le n$ and $T(v_i)=0$ for $i\gt n$.

Now all that's left to do is to prove that $T$ as defined is linear, but that's absolutely standard --- you just show $T(ax+by)=aT(x)+bT(y)$ for all $x,y$ in $V$ and all real $a,b$ (if you're working over the reals --- adjust the field of scalars to suit).
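The construction above can be sketched for the concrete case in the question. Here $(1,1)$ and $(2,3)$ are already a basis of ${\bf R}^2$, so the extension step is trivial; the function below computes the unique coordinates of $x$, sends them to the targets, and then linearity is spot-checked numerically (the vectors $x$, $y$ and scalars $a$, $b$ are arbitrary test values, not from the question):

```python
import numpy as np

# The concrete data from the question
v = [np.array([1.0, 1.0]), np.array([2.0, 3.0])]       # basis of R^2
w = [np.array([1.0, 0.0, 2.0]), np.array([1.0, -1.0, 4.0])]  # targets in R^3

B = np.column_stack(v)  # basis matrix; columns are v_1, v_2

def T(x):
    # Step 1: unique coordinates c with x = c_1 v_1 + c_2 v_2
    c = np.linalg.solve(B, x)
    # Step 2: define T x = c_1 w_1 + c_2 w_2
    return c[0] * w[0] + c[1] * w[1]

# T takes the prescribed values...
print(T(v[0]), T(v[1]))
# ...and satisfies T(ax + by) = aT(x) + bT(y) on arbitrary test inputs
x, y, a, b = np.array([5.0, -2.0]), np.array([0.0, 7.0]), 3.0, -1.5
print(np.allclose(T(a * x + b * y), a * T(x) + b * T(y)))
```

Of course the numerical check is no substitute for the proof, but it shows the two-step recipe in action: coordinates first, then the linear combination of targets.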

  • A transformation $T$ such that $T(1,1) = (1, 0, 2)$ and $T(2,3) = (1, -1, 4)$. (2012-10-17)