
What is the difference between a basis for the domain and a basis for the codomain?

Until now, I was under the impression that the basis you choose for the domain must also be the basis for the codomain. Of course, we can convert from one basis to another, but I thought that the conversion must take place for both the domain and the codomain.

Take the following linear transformation:

$T:{\bf R}^2\to{\bf R}^2$ given by $T(v)=2v$. Take $\{\,(2,3),(4,5)\,\}$ as the basis for the domain and the standard basis for the codomain. In this case, the matrix representing $T$ would be $\pmatrix{4&8\cr6&10\cr}$.

Apparently, the above example has different bases for the domain and the codomain. However, the distinction is still not clear to me.

What would the above example look like if the bases were reversed -- if the standard basis were the basis for the domain and $\{\,(2,3),(4,5)\,\}$ were the basis for the codomain?

I would greatly appreciate it if people could please take the time to elaborate on this concept and help me understand it.

3 Answers


For your first example, $v_1 = (2,3)$ and $v_2 = (4,5)$ is the basis for the domain and $w_1 = (1,0)$ and $w_2 = (0,1)$ is the basis for the codomain.

When you say that the matrix is $ \left( \begin{array}{cc} 4 & 8 \\ 6 & 10 \end{array} \right) $, what you really mean is that $$T(v_1) = 4w_1 + 6w_2,$$ $$T(v_2) = 8w_1 + 10w_2.$$

Equivalently, any vector in the domain can be written as $v = a_1 v_1 + a_2 v_2$ for some coefficients $a_1, a_2 \in \mathbb R$. Its image under $T$ can be written as $T(v) = b_1 w_1 + b_2 w_2$ for some coefficients $b_1, b_2 \in \mathbb R$. These coefficients are related by the matrix equation, $$ \left( \begin{array}{cc} b_1 \\ b_2 \end{array} \right) = \left( \begin{array}{cc} 4 & 8 \\ 6 & 10 \end{array} \right) \left( \begin{array}{cc} a_1 \\ a_2 \end{array} \right)$$
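The relationship above can be checked numerically. Here is a sketch in NumPy (the variable names are mine): each column of the matrix holds the coordinates of $T(v_k)$ in the codomain basis, which we obtain by solving a linear system.

```python
import numpy as np

# Domain basis vectors as columns, and the standard basis for the codomain
V = np.array([[2.0, 4.0],
              [3.0, 5.0]])   # columns are v1 = (2,3), v2 = (4,5)
W = np.eye(2)                # columns are w1 = (1,0), w2 = (0,1)

# Apply T(v) = 2v to each domain basis vector (column-wise)
TV = 2 * V

# Column k of M holds the coordinates of T(v_k) in the basis W:
# solve W @ M = TV for M.
M = np.linalg.solve(W, TV)
print(M)   # [[ 4.  8.]
           #  [ 6. 10.]]
```

Because the codomain basis is standard here, solving against $W$ is trivial, but the same code works for any invertible choice of $W$.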

Is this enough of an explanation for you to work out the correct matrix when the bases are reversed?

  • I still don't understand. Can you please elaborate? (2017-02-28)
  • $T(v_1) = T(2,3) = (4,6) = 4(1,0)+ 6(0,1)$. (2017-02-28)
  • $T(v_2) = T(4,5) = (8,10) = 8(1,0) + 10(0,1).$ (2017-02-28)
  • But isn't that exactly the same as what you wrote above? What is the difference between the reversed version and the original one? (2017-02-28)
  • $T(w_1) = T(1,0) = (2,0) = -5(2,3) + 3(4,5) = -5v_1 + 3v_2.$ (2017-02-28)
  • $T(w_2) = T(0,1) = (0,2) = 4(2,3) - 2(4,5) = 4v_1 - 2v_2.$ (2017-02-28)
  • So the matrix this way round is $\left( \begin{array}{cc} -5 & 4 \\ 3 & -2 \end{array} \right)$. (2017-02-28)
  • The numbers in the matrix come from reading off the coefficients. (2017-02-28)
  • @kennywong I found this question; if you have time, could you please clear up my confusion? When you compute $T(e_1)$ you get the vector $(2,0)$ in $\mathbb R^2$; since it lies in $\mathbb R^2$, you can represent it as a linear combination of the basis $v$, which is also in $\mathbb R^2$, right? But why isn't $(2,0)$ itself a column of the transformation matrix, and when is it? Thank you! (2018-03-09)

Here's the definition of the matrix of a linear map taken straight from Sheldon Axler's Linear Algebra Done Right, Third Edition:

Suppose $T\in \mathcal L(V,W)$ and $v_1, \dots, v_n$ is a basis of $V$ and $w_1, \dots, w_m$ is a basis of $W$. The matrix of $T$ with respect to these bases is the $m$-by-$n$ matrix $\mathcal M(T)$ whose entries $A_{j,k}$ are defined by $$Tv_k = A_{1,k}w_1 + \cdots + A_{m,k}w_m.$$

So, as you can see, when constructing a matrix from a linear map you have to choose a basis for the domain and codomain first. But they certainly don't have to be the same -- in fact if $V\ne W$ then they couldn't possibly be the same.


Here's how we apply this to your example to construct a matrix. First apply $T$ to each vector in your domain's basis:

$$T(1,0) = 2(1,0) = (2,0) \\ T(0,1) = 2(0,1) = (0,2)$$

Then we have to expand these in the basis for the codomain. In this case we have

$$T(1,0) = (2,0) = -5(2,3) + 3(4,5) \\ T(0,1) = (0,2) = 4(2,3) -2 (4,5)$$

Compare this to the definition above (written for a $2\times 2$ matrix):

$$Tv_1 = A_{1,1}w_1 + A_{2,1}w_2 \\ Tv_2 = A_{1,2}w_1 + A_{2,2}w_2$$

Then we see that the coefficients given above become the elements of our matrix in this specific way $${\mathcal M}(T) = \begin{bmatrix} -5 & 4 \\ 3 & -2\end{bmatrix}$$
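The same recipe can be checked numerically. Here is a NumPy sketch (the variable names are mine): the domain basis is now standard, and we solve for the coordinates of each $T(v_k)$ in the basis $\{(2,3),(4,5)\}$.

```python
import numpy as np

# Standard basis for the domain, {(2,3),(4,5)} for the codomain (as columns)
V = np.eye(2)
W = np.array([[2.0, 4.0],
              [3.0, 5.0]])

# Apply T(v) = 2v to each domain basis vector
TV = 2 * V

# Coordinates of T(v_k) in the codomain basis: solve W @ M = TV for M
M = np.linalg.solve(W, TV)
print(M)   # [[-5.  4.]
           #  [ 3. -2.]]
```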


In my opinion, it's simpler to explain in a general context. So suppose you have a linear map from a vector space $E$ to a vector space $F$, and this linear map is represented by a matrix $A$ when $E$ and $F$ are equipped with bases $\mathcal B$, $\mathcal C$ respectively.

Now if we change bases to $\mathcal B'$, $\mathcal C'$, and denote by $P$ the change of basis matrix from $\mathcal B$ to $\mathcal B'$ and by $Q$ the change of basis matrix from $\mathcal C$ to $\mathcal C'$, then the new matrix is $\;A'=Q^{-1}AP$.

Remember $P$ is the matrix of the identity map from $(E, \mathcal B')$ to $(E, \mathcal B)$, and $Q$ is the matrix of the identity map from $(F, \mathcal C')$ to $(F, \mathcal C)$.

We have the matrix $A$ of the linear map from $(E, \mathcal B')$ to $(F, \mathcal C)$. We want the matrix $B$ of the linear map from $(E, \mathcal B)$ to $(F, \mathcal C')$. Consider the commutative diagram: \begin{alignat}{2} (&E, \mathcal B')&\xrightarrow{~ A~~}~&(F, \mathcal C)\\ & \llap{P^{-1}}\uparrow\Vert&&\downarrow\Vert\rlap{Q^{-1}}\\ (&E, \mathcal B)&\xrightarrow{~ B~~}~&(F, \mathcal C') \end{alignat} You deduce at once that $\;B=Q^{-1}AP^{-1},$ and in the present case, since $E=F$ and $P=Q$: $$B=P^{-1}AP^{-1}.$$