
When I have a linear map $T:V\rightarrow W$ and the images of some ordered basis $B=(b_1,\dots,b_n)$ of $V$, what do I have exactly if I put those images in a matrix? Is it by default:

$[T]^B_E = \begin{bmatrix} T(b_1) & \dots & T(b_n) \end{bmatrix}$

where the output is according to the standard basis of $W$? What are those images if I don't take their coordinate vectors?
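A small concrete sketch of this, assuming $V=\mathbb R^3$ and $W=\mathbb R^2$ so that $E$ really is a standard basis (the matrix `A` and basis `B` below are made-up examples):

```python
import numpy as np

# Hypothetical example: T : R^3 -> R^2, given by its standard matrix A.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])

# An ordered basis B = (b_1, b_2, b_3) of R^3, stored as columns.
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

# Column j of [T]^B_E is the coordinate vector [T(b_j)]_E.
# Because E is the standard basis of R^2, [T(b_j)]_E is just T(b_j) = A @ b_j,
# so the whole matrix [T]^B_E is the product A @ B.
T_B_E = A @ B

# Check column by column: each column really is the image of a basis vector.
for j in range(3):
    assert np.allclose(T_B_E[:, j], A @ B[:, j])
```

The point is that the columns only become numbers once you pick the basis $E$ of $W$; here that step is invisible precisely because $E$ is standard, so $[T(b_j)]_E = T(b_j)$.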

  • 0
    If you have a basis for $W$ (and this basis is $E$), then the matrix is represented as you have shown, yes. (I am a little tired, so I am not sure whether the $B$ or the $E$ goes below the other. If you are sure they are okay, don't mind me; I was just telling myself I don't remember which goes where.) Note that there is no such thing as a "standard basis" in an abstract vector space $W$. You only have that luxury in $K^n$ for a field $K$ in general (or $\mathbb R^n$, if you like concrete examples better). (2012-11-08)
  • 0
    In our notation we put the input basis on top, the output basis on bottom. (2012-11-08)
  • 0
    So until I take coordinate vectors of the images, I've got abstract images? Those images are "in" $W$, but not expressed relative to any basis? I'm having trouble getting my head around that. (2012-11-08)
  • 2
    You should get used to thinking about vectors as things that exist without necessarily being expressed numerically in a basis, just as an object has a weight independently of your choice of a unit to measure that weight by. Vectors could be complicated objects like functions satisfying some differential equation, or whatever. You don't know or need to know this; the power of linear algebra is that you can use the same techniques independently of what the vectors actually are. (2012-11-08)
  • 2
    What do you mean by "put those images in a matrix"? Those images may not be column vectors. They are just elements of some vector space $W$. They might be polynomials or symmetric matrices or something strange like that; you might not be able to just put them into a matrix. (2012-11-08)
  • 0
    So what's going on when I have the images under, say, $T:\mathbb R^m\rightarrow \mathbb R^n$ of a basis $B$, and I put the basis vectors and their images as the rows of a matrix in order to find an explicit formula, using row operations to change the left block from $B$ to $I$: $$\begin{bmatrix} b_1 & T(b_1) \\ b_2 & T(b_2) \\ b_3 & T(b_3) \end{bmatrix}$$ Imagine $I$ on the left now and suitable linear combinations of the images on the right. I then combine the rows and get an explicit formula for $T$. What's going on there? (2012-11-08)
  • 2
    @Robert: I'm not sure I fully understand your last question, but to answer the other one, when you look at examples of abstract vector spaces, you can understand what it means to have "no standard basis". For instance, take $V$ to be the vector space of functions from $[0,1]$ to $\mathbb R$ that are of the form $ax+b$ on the interval $[0,1/2]$, of the form $cx+d$ on the interval $[1/2,1]$, and are continuous (i.e. the endpoints match at $1/2$). Can you think of a "standard basis" for this vector space?... This is pretty much the point. (2012-11-08)
  • 0
    @Robert: Well, this matrix you just presented has $m$-dimensional vectors as entries in the left column and $n$-dimensional vectors in the right column. For me that is not problematic in itself, but usually a matrix has *scalars* as entries. (2012-11-08)
  • 1
    @PatrickDaSilva What I'm talking about is a method for solving problems of this type: http://math.stackexchange.com/questions/183077/deducing-formula-for-a-linear-transformation (2012-11-08)
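The row-reduction method described in the comments above can be sketched in code, assuming everything lives in $\mathbb R^3$ (the particular matrices `Bt` and `Ct` below are made-up examples). Row-reducing the left block of the augmented matrix $[\, b_i \mid T(b_i) \,]$ to the identity amounts to multiplying both blocks by the inverse of the left block; afterwards the $i$-th row of the right block is $T(e_i)$:

```python
import numpy as np

# Hypothetical data: T : R^3 -> R^3, with the images of a basis B known.
# Rows of Bt are the basis vectors b_i; rows of Ct are the images T(b_i).
Bt = np.array([[1.0, 0.0, 1.0],
               [1.0, 1.0, 0.0],
               [0.0, 1.0, 1.0]])
Ct = np.array([[2.0, 1.0, 0.0],
               [0.0, 2.0, 1.0],
               [1.0, 0.0, 2.0]])

# The augmented matrix [ b_i | T(b_i) ] has Bt on the left, Ct on the right.
# Row-reducing the left block to I multiplies both blocks by inv(Bt) on the
# left; the right block then becomes inv(Bt) @ Ct, whose i-th row is T(e_i).
rows_T_of_e = np.linalg.inv(Bt) @ Ct

# The standard matrix of T has the vectors T(e_i) as its COLUMNS, so transpose:
A = rows_T_of_e.T

# Sanity check: A must send each basis vector b_i to its known image T(b_i).
for i in range(3):
    assert np.allclose(A @ Bt[i], Ct[i])
```

In other words, the row operations re-express the known images so that each row tells you what $T$ does to a standard basis vector; reading the rows off is exactly the "explicit formula" for $T$.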

1 Answer