
Let $V = W = M_{2\times 2}$ and let

$$B_v = B_w = \bigg \{ \left[\begin{array}{cc}1&0\\0&0\end{array}\right],\left[\begin{array}{cc}0&1\\0&0\end{array}\right],\left[\begin{array}{cc}0&0\\1&0\end{array}\right],\left[\begin{array}{cc}0&0\\0&1\end{array}\right] \bigg \}$$

be their respective bases.

Let the linear operator $T$ be defined as:

$$T(A) = \frac{1}{2}(A+A^T)$$

Write down the matrix $T(B_v,B_w)$.

Apologies for the lack of attempt. I seem to fail to understand what the question is asking. I'm just not sure what $T(B_v,B_w)$ is. I know $T$ is an operator for matrices, but I'm not sure how to proceed when the input is two bases.

1 Answer

As a function, $T: \mathbb{R}^{2\times2}\longrightarrow\mathbb{R}^{2\times2}$. You can check that $T$ is linear, so it is a linear transformation (in fact a linear operator, since its domain and codomain coincide).
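As a quick sanity check (not part of the original answer), linearity of $T$ amounts to additivity and homogeneity, which we can verify numerically on a couple of matrices:

```python
import numpy as np

def T(A):
    """The symmetrization operator T(A) = (A + A^T) / 2."""
    return 0.5 * (A + A.T)

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2))
c = 3.0

# Additivity: T(A + B) = T(A) + T(B)
assert np.allclose(T(A + B), T(A) + T(B))
# Homogeneity: T(cA) = c T(A)
assert np.allclose(T(c * A), c * T(A))
```

This checks linearity only on sample inputs, of course; the general proof follows from linearity of the transpose.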


In general, consider the following. Let $U,V$ be finite dimensional vector spaces and $F:U\longrightarrow V$ be a linear map. Given an ordered basis $\alpha$ of $U$ and an ordered basis $\beta$ of $V$, we may write down a matrix $M=M_F(\alpha,\beta)$ that represents the map $F$ in the following sense:

Each $u\in U$ can be written as a linear combination of the elements of $\alpha$ in a unique way; the coefficients of this linear combination are the coordinates of $u$ in the basis $\alpha$, and we call these $[u]_\alpha$. Analogously, we may do the same for $v \in V$ with $\beta$, and call those coordinates $[v]_\beta$.

Example: Consider the canonical basis of $\mathbb{R}^2$ and let $u=(2,6)$ in this basis. Now, let $\alpha=\{(1,1),(1,-1)\}$ be another basis for $\mathbb{R}^2$. Notice the basis is ordered, with $\alpha_1=(1,1)$ and $\alpha_2=(1,-1)$. We have $u=4\alpha_1-2\alpha_2$, so $[u]_\alpha=(4,-2)$.
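Numerically, finding $[u]_\alpha$ means solving the linear system whose columns are the basis vectors; a minimal sketch with NumPy:

```python
import numpy as np

# Basis alpha = {(1, 1), (1, -1)}, stored as the columns of a matrix.
alpha = np.array([[1.0,  1.0],
                  [1.0, -1.0]])

u = np.array([2.0, 6.0])

# Solving alpha @ c = u gives the coordinates [u]_alpha.
coords = np.linalg.solve(alpha, u)
# coords == [4., -2.], matching u = 4*alpha_1 - 2*alpha_2
```

The same idea works in any finite dimension: stack the basis vectors as columns and solve.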

The matrix $M$ is such that $M\cdot{[u]}_\alpha={[Fu]}_\beta$: when you multiply the coordinate vector of $u$ in the basis $\alpha$ by $M$, you get the coordinates of $Fu$ in the basis $\beta$. We will construct $M$ one column at a time.

Indeed, let $e_i=(0,\dots,0,1,0,\dots,0)$, where the $1$ appears at position $i$. You can check that for any matrix $A$, $Ae_i$ is simply the $i$-th column of $A$. Now, how does $M$ act on $e_i$? Observe that $e_i={[\alpha_i]}_\alpha$, hence

$$Me_i=M{[\alpha_i]}_\alpha=[F\alpha_i]_\beta$$

In words, this means that the $i$-th column of $M$ is obtained by taking the $i$-th element of the basis $\alpha$, applying $F$ to it, and then writing the result in the basis $\beta$.
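This column-by-column recipe translates directly into code. The following sketch (the helper `matrix_of` is hypothetical, not a library function) builds $M_F(\alpha,\beta)$ for vector spaces modeled as $\mathbb{R}^n$:

```python
import numpy as np

def matrix_of(F, alpha, beta):
    """Matrix of the linear map F with respect to ordered bases alpha, beta.

    alpha and beta are lists of 1-D arrays. Column i of the result is
    [F(alpha_i)]_beta, obtained by solving B @ c = F(alpha_i), where B
    has the beta vectors as its columns.
    """
    B = np.column_stack(beta)
    return np.column_stack([np.linalg.solve(B, F(a)) for a in alpha])

# Example: F(x, y) = (x + y, x - y) in the standard basis of R^2.
F = lambda v: np.array([v[0] + v[1], v[0] - v[1]])
e = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
M = matrix_of(F, e, e)
# M == [[1, 1], [1, -1]]
```

With the standard basis on both sides, the columns of $M$ are just $F(e_1)$ and $F(e_2)$, as the recipe predicts.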


Now, with all of this out of the way, let's tackle the problem at hand.

Since $\mathbb{R}^{2\times2}$ is four-dimensional, the matrix $T(B_v,B_w)$ will be $4\times4$. We name the elements of $B_v=B_w$:

\begin{align} &b_1=\left[\begin{array}{cc}1&0\\0&0\end{array}\right]&&b_2=\left[\begin{array}{cc}0&1\\0&0\end{array}\right]\\ &b_3=\left[\begin{array}{cc}0&0\\1&0\end{array}\right]&&b_4=\left[\begin{array}{cc}0&0\\0&1\end{array}\right] \end{align}

so that $B_v=B_w=\{b_1,b_2,b_3,b_4\}$, in this order. Now, remember, the $i$-th column of $T(B_v,B_w)$ will be the $i$-th element of $B_v$, transformed by $T$, and then written in the basis $B_w$; that is:

The $i$-th column of $T(B_v,B_w)$ is $Tb_i$ written in the basis $B_w$.

Calculating the $Tb_i$'s:

\begin{align} &Tb_1=\left[\begin{array}{cc}1&0\\0&0\end{array}\right]&&Tb_2=\left[\begin{array}{cc}0&\frac12\\\frac12&0\end{array}\right]\\ &Tb_3=\left[\begin{array}{cc}0&\frac12\\\frac12&0\end{array}\right]&&Tb_4=\left[\begin{array}{cc}0&0\\0&1\end{array}\right] \end{align}

Now, writing them in the basis $B_w$:

\begin{align} &[Tb_1]_{B_w}=(1,0,0,0)&&[Tb_2]_{B_w}=\left(0,\frac12,\frac12,0\right)\\ &[Tb_3]_{B_w}=\left(0,\frac12,\frac12,0\right)&&[Tb_4]_{B_w}=(0,0,0,1) \end{align}

Finally:

$$T(B_v,B_w)=\pmatrix{1&0&0&0\\0&\frac12&\frac12&0\\0&\frac12&\frac12&0\\0&0&0&1}$$
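We can double-check this matrix numerically (a verification sketch, not part of the original answer): with the ordered basis $b_1,\dots,b_4$ above, the coordinates of a $2\times2$ matrix are just its entries read row by row, so $T(B_v,B_w)$ applied to the flattened $A$ should equal the flattened $\frac12(A+A^T)$:

```python
import numpy as np

M = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 0.0, 1.0]])

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Row-major flattening matches the ordering b_1, b_2, b_3, b_4.
lhs = M @ A.flatten()                 # [T(A)] in the basis B_w
rhs = (0.5 * (A + A.T)).flatten()     # T(A) computed directly, then flattened
# lhs == rhs == [1, 2.5, 2.5, 4]
```

The agreement for a generic $A$ confirms that $M$ represents $T$ in these bases.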

  • Thank you for the helpful answer. I just have one question. You mentioned that $u=4\alpha_1-2\alpha_2$, so $[u]_\alpha=(4,2)$. Why is it $(4,2)$ instead of $(4,-2)$? (2017-02-06)
  • Oops, that was a typo! You are correct, it should have a minus sign. (2017-02-06)
  • Apologies, just one more question. You mentioned that the $i$-th column of $T(B_v,B_w)$ will be the $i$-th element of $B_v$, transformed by $T$, and then written in the basis $B_w$. Is the rule always like that? I.e., is the $i$-th element of the first input of $T$ always transformed by $T$ and then written in the basis given as the second input? (2017-02-06)
  • It is always like that, in the sense described above. $T(B_v,B_w)$ is a **matrix** that *represents* the **transformation** $T$, as I described. Notice that the transformation $T$ itself does not depend on any basis whatsoever. The matrix $T(B_v,B_w)$ always satisfies this: $$\text{The $i$-th column of $T(B_v,B_w)$ is the $i$-th element of $B_v$, transformed by $T$, then written in the basis $B_w$}$$ This doesn't mean it's the only way to obtain $T(B_v,B_w)$, only that it's always true. (2017-02-06)